Hacker News
Lisp and Haskell (2017) (markkarpov.com)
200 points by sridca 7 months ago | 151 comments

My problem with Haskell is that it is not pragmatic and the reason I stopped using it. The philosophy of separating pure and impure code – and enforcing it – does more harm than good. Often it's difficult to understand what your "fourth level of abstraction" code above the usual abstraction levels really does. Haskell is fun to learn. Haskell has a lot of interesting concepts to explore and it will make you a better programmer. Haskell code can be really dense. And Haskell code can be totally unreadable, using concepts which take weeks to learn – which is... not so great.

Lisp (Common Lisp) is what you get when you reduce the rules and syntax to a minimum while trying, at the same time, to be a maximally flexible programming language. This simplicity and applied pragmatism is what makes it so great. Like the authors of the Lisp books warned, after learning Lisp, coding won't be the same. Now I see code everywhere that could easily be written in Lisp, but is not so great in language XYZ. All those DSLs and stringified APIs and code generators they need to do real things, and data formats like JSON... all obsolete, if they had chosen Lisp (or a language with brackets and Lisp-like evaluation rules).

The other thing that is often overlooked regarding Lisp is that in a Lisp-like language there is no need to wait for a new version of the programming language to be released to get new syntax – you can simply add it yourself (or have a library do it for you). E.g. a lot of the JavaScript mess could be fixed by code transformation, if they had chosen a Lisp-like syntax in the first place.

I don't see Haskell as uber language. If I want super correct code, speed and I'm okay with using a more complex language, then there is Rust – which is probably more correct than Haskell and faster, too.

Haskell, Lisp and C++ would really benefit from a std library 2.0 and a default package manager... one can dream.

The ability to come up with new syntax in LISP is in my opinion vastly overrated.

Yes, LISP code can be easily parsed for as long as you stick to lists. However, the ability to do anything complicated with LISP code is simply not there, because LISP code loses type info, which is extremely important for anything you'd want to do with code, like all kinds of non-superficial transformations and refactorings.

So here are some facts given as examples:

(1) LISP code does not make the job of IDE authors easier; just ask the authors themselves

(2) in terms of macros, anything more complicated than lazy evaluation of thunks and kiddie examples is out of reach; for example .NET’s LINQ cannot be expressed in LISP, not unless you add a static type system, but at that point you’re no longer talking about LISP ;-)

In my opinion LISP has been vastly romanticized, yet it doesn’t live up to expectations. It falls short in every department, from functional programming, to available tooling, to the great enlightenment people are supposed to experience once they learn it.

Whereas Haskell really does live up to expectations, being one of those very few ecosystems that does make developers better, at this point being the lingua franca of FP.

> in terms of macros, anything more complicated than lazy evaluation of thunks and kiddie examples is out of reach;

I wrote a compile-time test framework in 7 lines of Gambit Scheme code https://github.com/billsix/bug/blob/master/README.md

Clojure has implementations of CSP, miniKanren and pattern matching, all done with macros. I'm not sure how you can call these toy examples.

You make an interesting point about types, but

> (2) in terms of macros, anything more complicated than lazy evaluation of thunks and kiddie examples is out of reach

I've seen and built tons of exotic stuff with macros. On Lisp builds a Prolog compiler if you want a specific example.

Your point about types makes me curious about LINQ. Brings to mind Julia's generated functions.

>for example .NET’s LINQ cannot be expressed in LISP, not unless you add a static type system

Doesn't that imply that Lisp, sans static typing, is not Turing-complete? Surely that is not the case.


It's lambda-calculus complete, which has famously been proven to be equivalent to Turing completeness.

Also, type checking is basically, more or less, putting restrictions on your code. If there is a language that doesn't have the syntax to compute certain things, how would restricting that language via type checking suddenly make it able to compute everything and be Turing complete? Makes no sense. To change a language from not Turing complete to Turing complete you have to expand – not restrict – its computational ability. Again, type checking serves to restrict.

Type checking is essentially just pre runtime special syntax for:

  define add (x, y):
      if type(x) != 'int' or type(y) != 'int':
          throw TypeError
      return x + y
or in a lisp like functional language:

   (define (check-int x)  ; assuming some 'type-of' predicate exists
     (if (eqv? (type-of x) 'int) x (error "Type Error")))

   (define (add x y)
     (+ (check-int x) (check-int y)))
It's not a big deal. I'm not sure what LINQ is, but you don't have to add a type checking system to a language to do type checking if you know what I mean.

He doesn't know what he's talking about.

Type checking is much more than that. It is not the same as throwing exceptions when types don't match; it is about proving that a type checked program will never produce type errors.
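To make that distinction concrete, here's a minimal Haskell sketch (names and types are mine, purely for illustration) contrasting a statically proven signature with a runtime-tagged encoding like the parent's pseudocode:

```haskell
-- Static: the signature is a compile-time proof that no caller can
-- ever pass anything but Ints; there is no runtime check left to fail.
add :: Int -> Int -> Int
add x y = x + y

-- Dynamic encoding: values carry a runtime tag, and a mismatch is
-- only discovered when the call actually happens.
data Dyn = I Int | S String

dynAdd :: Dyn -> Dyn -> Either String Int
dynAdd (I x) (I y) = Right (x + y)
dynAdd _     _     = Left "TypeError"
```

`dynAdd (I 1) (S "a")` compiles fine and fails at run time; the equivalent ill-typed call to `add` is rejected before the program ever runs, which is the "proving" being described.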

You're right. I wanted to illustrate that type checking shouldn't be a factor in the parent's context, as you can use code to explicitly type check or prove it.

Of course not. You could always implement C# (static types and all) and LINQ atop Lisp, or indeed any other turing complete language.[1]

The point of static type systems isn't that they add power -- it's that they take power away. Thus empowering the programmer to reason in a restricted subset of "all the possible things".

[1] I'm ignoring the detail of there actually being a way to interface with the actual physical hardware, but I think we can assume that in any remotely practical TC language.

A Turing machine is Turing-complete.

Good luck expressing LINQ syntax, or Lisp syntax, or even Basic-80 syntax on it.

Syntax is important, it's a tool of thought.

I understood the post to which I replied to be arguing that LINQ could not be expressed in Lisp, not that Lisp doesn't have the correct syntax.

Surely you can write LINQ in any Turing-complete language.

I guess the question this raises is: how far can Lisp-LINQ stray from the syntax, semantics, and ergonomics of C#'s LINQ before you stop calling it LINQ?

Turing completeness doesn't describe expressivity of source code, as far as I understand it - it describes the ability to produce a desired outcome. So yes, you can perform the same operations LINQ allows, in Lisp or any other Turing-complete language. But can you do it while giving the programmer the same (or a similar enough) experience? From the .NET docs:

> Language-integrated query allows query expressions to benefit from the rich metadata, compile-time syntax checking, static typing and IntelliSense that was previously available only to imperative code. Language-integrated query also allows a single general purpose declarative query facility to be applied to all in-memory information, not just information from external sources.

Now, to be fair, I'm not sure you can express all that in Haskell either. And it could be reasonable to argue that a few of those benefits (e.g. intellisense) just _haven't_ been done for Lisp, not necessarily that they _can't_ be done.
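For the in-memory half at least, the analogy to Haskell is direct, since LINQ's query comprehensions were modeled on monad/list comprehensions. A rough sketch (the `Person` type is made up for illustration):

```haskell
-- A hypothetical record type standing in for a C# class.
data Person = Person { name :: String, age :: Int }

-- Roughly: from p in people where p.Age >= 18 select p.Name
adults :: [Person] -> [String]
adults people = [ name p | p <- people, age p >= 18 ]
```

What this does not capture is the `IQueryable` side: reifying the query as an inspectable expression tree that a provider can translate to SQL, which is the part in dispute above.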

Syntax is different from abstraction. The problem with doing those things on Turing machines or in assembly is the low level of programming. Otherwise their syntax is pretty simple: commands and operands.

> .NET’s LINQ cannot be expressed in LISP

That's an extraordinary claim. Do you have extraordinary evidence to back it up?

The advantage of LINQ in my eyes is that it allows you to build an expression tree that you can wait to evaluate until a result is actually needed.

Then the provider can decide how it wants to do it, based on the types and operations involved. A SQL DB based provider can’t safely push any work to the database if it doesn’t know the structure or types of the objects and properties in the expression tree. It would have to return whole tables and perform all evaluation in the app where types could be handled dynamically.

I think something like typed racket or clojure may support it, but many lisps would not.
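The "build now, evaluate later" core of that idea is just an explicit expression tree with multiple interpreters. A minimal sketch (toy arithmetic instead of queries; all names are hypothetical):

```haskell
-- A tiny expression tree: constructing it does no work at all.
data Expr
  = Lit Int
  | Add Expr Expr
  | Mul Expr Expr

-- One "provider": evaluate the tree in memory.
eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Mul a b) = eval a * eval b

-- Another "provider": compile the same tree to a textual query
-- instead of running it, the way a SQL-backed provider would.
toSql :: Expr -> String
toSql (Lit n)   = show n
toSql (Add a b) = "(" ++ toSql a ++ " + " ++ toSql b ++ ")"
toSql (Mul a b) = "(" ++ toSql a ++ " * " ++ toSql b ++ ")"
```

The parent's point about types is that a real LINQ provider inspects the static types *inside* the tree to decide what can be pushed to the database; an untyped tree gives the provider much less to work with.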

> The advantage of LINQ in my eyes is that it allows you to build an expression tree that you can wait to evaluate until a result is actually needed.

Isn't that the exact definition of the "unquote" and "unquote-splicing" operators inside a quasiquote?

Honestly asking since I'm not past the "makes my head hurt" stage of scheme macros, probably doesn't help that the quasiquote macro is buggy in my minischeme sandbox.

It's been a while, and I've only really used Racket extensively, but "unquote" and "unquote-splicing" operate at macro expansion time, not run time. So unless you're using a typed lisp like Typed Racket, the macro won't have enough information to (for example) know that something is a number vs a string - the kind of distinction that's important for efficient and correct SQL query generation.

Common Lisp has a way better OO type system than C#. It has to be queried at run-time, granted, but that doesn't stop something like LINQ from working.

This may be true - I've not used Common Lisp, only Racket.

I wonder how much better Scheme with its static typing extensions fares in this regard.

IDEs? You mean emacs?

Also see Dr. Racket.

> Haskell code can be totally unreadable, using concepts which take weeks to learn – which is... not so great.

I don't think it's fair to say something is "not so great" because it takes more than a few weeks to learn. That logic leads to a sad place. You suggest using Lisp and maybe Rust, but both of those languages would probably take more than a few weeks to learn.

Also, you could just use the IO type on every function in Haskell, and thus eliminate the separation between pure and impure code. You'd only have to type a few extra characters here and there, which isn't ideal, but is acceptable while being pragmatic.

Haskell has had a default package manager, called "cabal", for many (15+ ?) years.

I learned Haskell and then moved to OCaml and found it to be a much more practical language. Haven’t tried Lisp yet, but it’s quickly moving to the top of my list.

Oddly enough it was the opposite for me. I think I might have given up on Haskell if I hadn't learned O'Caml first. It's been 15 or so years, so I'm sort of guessing at this point, but I think the combined cognitive effort of going from typical imperative languages to Haskell and the whole algebraic data types+pattern matching thing might have broken me.

I found Haskell much more practical because it makes refactoring truly easy, but I'm very "programmer-centric". I mean 'refactoring' in the strict sense of preserving absolutely all the existing behavior, including side effects. That is extremely difficult if you can have side effects all over the place. (Strictness obviously helps a lot, but it also has downsides.)

Btw, given your experience and if you want to try Lisp seriously I'd probably recommend Racket, Typed Racket and their Turnstile experiment(?).

You’re right about the advantages of Haskell regarding side-effect strictness and refactoring. That’s a huge win. I think now that I have significant experience with OCaml it’d make going back to Haskell easier for me. That’s probably something I’ll do. But right now I’m finding OCaml has been great for me.

I’ll look into Racket and friends. Thanks

Or, you could go with Hackett, which is a Haskell-like language implemented in Racket: https://github.com/lexi-lambda/hackett

How do you find OCaml to be a more practical language? Interested in trying it out.

It holds less closely to the ideal of being a purely functional language. It gets out of my way and lets me get things done. But I think I misspoke a bit. A big part of what has made OCaml better for me is that the tooling has been easier for me to jump into as a beginner.

>I don't see Haskell as uber language. If I want super correct code, speed and I'm okay with using a more complex language, then there is Rust – which is probably more correct than Haskell and faster, too.

But it doesn't even have Higher Kinded Types! What a disgrace. /s

I do wonder, what would OP, or any seasoned Haskell programmer, think about Rust. Would they consider it strange? Primitive? Too mainstream?

Rust is a great systems language.

When I write applications, I don't need a systems language, and Haskell is easier to use.

Also, Rust is a great imperative language, but it's not as great as a functional language, and you can't reap the benefits of pure functional programming.

Rust doesn't really have a way to control for effects in functions. The type of a function doesn't define what the function can do. Coming from Haskell, I think this is the biggest problem.

How does that work in Haskell, is it the pure/impure dichotomy regarding I/O that is encoded in the type signature? Because one could argue that Rusts ownership is a primitive version of that, but yielding most of the gains ("if it compiles, it works").

Kind of, but not really.

The combination of type classes (traits in Rust) and HKTs in Haskell mean that you can quite easily restrict a function to only be able to do a subset of what's possible in "the world".

A trivial example would be

   foo :: Logger m => Int -> m ()
where Logger is a type class (trait in Rust-speak) which has Monad as a superclass (thus allowing 'imperative'-style programming), but also has a "log" method which allows the program to emit a string to the log.

The thing is that given the restrictions of parametric polymorphism that function 'foo' has to work for any Logger instance we give it, so it absolutely cannot do anything other than pure computation or invoking the "log" method. It cannot do any other side effects at all. So, it's not "pure", but it's also much more restricted than "impure".

That's a very powerful tool for modeling highly effectful systems.
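Fleshing that sketch out into something self-contained (I've widened the result to `m Int` so it computes something; `CollectLog` is my own toy instance for illustration — in real code you'd likely reach for a Writer monad from mtl):

```haskell
class Monad m => Logger m where
  logMsg :: String -> m ()

-- 'foo' may log and compute, and nothing else: it has to work for
-- *every* Logger instance, so no other effect is reachable from it.
foo :: Logger m => Int -> m Int
foo n = do
  logMsg ("doubling " ++ show n)
  return (n * 2)

-- A pure instance that just collects log lines, for testing.
newtype CollectLog a = CollectLog { runCollect :: ([String], a) }

instance Functor CollectLog where
  fmap f (CollectLog (w, a)) = CollectLog (w, f a)

instance Applicative CollectLog where
  pure a = CollectLog ([], a)
  CollectLog (w1, f) <*> CollectLog (w2, a) = CollectLog (w1 ++ w2, f a)

instance Monad CollectLog where
  CollectLog (w1, a) >>= k =
    let CollectLog (w2, b) = k a in CollectLog (w1 ++ w2, b)

instance Logger CollectLog where
  logMsg s = CollectLog ([s], ())

-- A production instance that actually writes to stdout.
instance Logger IO where
  logMsg = putStrLn
```

Running `foo` against `CollectLog` tests its logging purely; running it in `IO` actually prints. Nothing in `foo` can read files or fire network requests, because the only operations in scope are Monad's and `logMsg`.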

Well there are more Haskell jobs than rust here, so I wouldn’t say more mainstream.

The syntax isn't as nice as Haskell's, either. A Haskell programmer fed up with purity would just move to OCaml or F#, tbh.

>Well there are more Haskell jobs than rust here, so I wouldn’t say more mainstream.

Only inside the Haskell echo chamber.

Rust can be put to use anywhere, without even advertising for it -- and it already powers widely used code across the world (and not just at Mozilla).

Haskell use mostly concerns a few financial institutions...

> Haskell use mostly concerns a few financial institutions

...who somehow have more resources to employ programmers than the organizations using Rust. Funny, that...

Do they? Or did they just have a head start, without booming any further soon?

It's not like the whole financial sector runs on Haskell or something -- the majority uses the usual culprits. The stories I've read about these organisations are of some opinionated PM going in and making Haskell or OCaml and the like the officially supported language -- at Jane Street, Standard Chartered, and the like. Which is the definition of an outlier choice.

So, yeah, there are "more programmers" wanted for those 20+ and 30+ year old languages, compared to a merely 3-year-old language (Rust 1.0 was released in 2015).

But nowhere near the momentum.

Rust has already been used by Dropbox, Canonical, Cloudflare, Atlassian, Microsoft (Azure), Chef, Mozilla (of course), and tons of others in backend and client code. And that's in 3 years, while the language is changing and core libs are still being built.

And it's silly to judge by job postings, as those places don't need to advertise for specific Rust roles (they can get regular systems programmers or use their own existing devs and have them do Rust). It's not a huge paradigm shift like with Haskell where you'll need people with that kind of training already. Besides, Rust is so new, there would be little point asking for Rust devs explicitly.

Don't forget Scala, it remains an interesting language. There's now a "native Scala" initiative...

Regardless, having the flexibility to program in the most appropriate style is nice.

Scala is an interesting language, but I wish it had a tighter focus. John De Goes describes the pain points quite accurately in his Scalapeno keynote: https://youtu.be/v8IQ-X2HkGE

Oh, but your first sentence is all-too-true at face value.

Rust is currently in the process of baking asynchronous I/O into the language, and their implementation cannot do this: http://blog.sigfpe.com/2008/12/mother-of-all-monads.html.

> My problem with Haskell is that it is not pragmatic and the reason I stopped using it.

I find it extremely pragmatic, but perhaps not in the way most people use the word. It's pragmatic about the abilities of programmers to get every single detail right all of the time, i.e. it discards that notion entirely. We will not get things right enough of the time and so the language must support and help you. This goes all the way from simple static typing to enforcing Purity of Essence, I mean, Functions.

If you don't care about managing side effects you can always just write everything in IO and slap a "pure" here or there. Haskell's actually also a pretty good imperative programming language, IME.

I find that haskell is great to learn about functional programming and programming ideas, but to get things done, I stick to imperative or OO languages. Could be that I learned CS with C/C++/Java and work with C#/C++. So I just stick to what I'm familiar with.

Agree that Haskell sometimes is not that practical, but it does make you a better programmer.

So I put his example into a lisp file and compiled it under SLIME/sbcl, which is how I typically develop:

    1 compiler notes:

        The function (LAMBDA (X) :IN ADD-TEXT-PADDING) called by MAP-INTO returns NULL but CHARACTER is expected                           
        See also:                                                                                                                          
          SBCL Manual, Handling of Types [:node]                                                                                             
        --> TRULY-THE SB-KERNEL:%MAP                                                                                                       
          (MAP-INTO (MAKE-SEQUENCE SB-C::RESULT-TYPE (MIN (LENGTH #:G51))) SB-C::FUN                                                         

    Compilation failed.
It's also patently false that Quicklisp has no tests; it has no unit tests, but it is tested thoroughly as a whole, both the code and the set of systems it provides.

FWIW the first comment in that blog post shows that at the time of writing the post, the compiler could not detect that error.

My suspicion is the author just didn't know how to use sbcl for this. I quickly tried this and did not receive any such message.

That's fair. I would believe that error checking on lambdas has improved.

There is no magic in Haskell's strong typing. Axiom, a computer algebra system with extremely strong typing, is written in Common Lisp. Axiom's language is essentially a domain-specific language on top of Lisp.

The key difference is "impedance matching". An impedance mismatch occurs when you hook a soda straw to a firehose: there is a lot of friction and lossage. In programming terms, there is an impedance mismatch between your problem and the computer. You use a programming language to bring them together.

In assembler, you have to move your problem all the way to the machine. In Haskell, you have to move your problem all the way to the type system. In Lisp you can write at the machine level (e.g. (car x), which is just a pointer fetch) or you can write at the problem level (e.g. (integrate (sin x)))... or you can do both in the same statement: (car (integrate (sin x))).

In other words, lisp allows you to craft a solution that is close to the machine and close to the problem at the same time, in the same language.

You have the choice to move as much or as little to the type system in Haskell as you deem beneficial. You do not have to "type it to the max", as is commonly offered as a stereotype.

I have worked on a fairly large Common Lisp project (around 500k lines of actively managed code). That was the most pleasant experience in my professional life. Refactoring was easy, and introduction of new features was simple and clear. I attribute it partially to the language itself and partially to the team culture. We had a lot of tests, BTW. After this I had a quite bad experience with a Clojure project at a different company, so I am not dogmatic about Lisp supremacy anymore. Still, Common Lisp can definitely be extremely successful in production, given a proper environment.

I’ve been programming lisp for a few years now (Scheme then CL) and write Clojure daily for my current job. I initially described Clojure as: the worst dialect of lisp I’ve used, but the only one I’ve been paid to write in. It’s grown a lot on me over the past few months, but I still end up missing CL every now and then at work (I think I mainly like CL’s Slime more than Clojure’s Cider).

What were some unsettling points about the Clojure project you worked on? One thing I’ve noticed is that since Clojure is a relatively new language, a lot of newer lispers will start off with it, and I imagine that companies are more likely to try Clojure than CL as a first lisp. Consequently, the Clojure code you run into on the job may be of lower quality.

Laziness is quite dangerous given the existence of side effects in Clojure – this would be the most common error for new developers. I missed LOOP, even though it has a bad reputation – nevertheless it covers almost all useful cases in practice. We were misusing AOT compilation, and the behavior of that code was sometimes surprising.

But the main failure in my opinion was the absence of good development patterns which leverage the strengths of the language and mitigate its weaknesses. Dynamic languages such as Lisp absolutely need the REPL as the primary development mode, conventions have to be strictly enforced, and consistency is much more important than in languages with rich static typing. Clojure can be and is successful in many projects. But if your team is trying to write Java with parentheses, everything is hard.

> given proper environment.

I think a lot of languages will give a pleasant experience in a proper environment. The question then becomes, does a strict language like Haskell require less of a 'proper' environment?

That would be interesting to research -- though the amount of variables (team member's experience, team culture & hierarchy, company culture, financial interests, deadlines, project size, software development process, etc. etc.) would probably be hard to account for. All of them can have a very big impact on how software gets written.

I would gladly work on large scale Haskell project and make a fair comparison! I managed to write my code in Java like Haskell (using FunctionalJava) and it was very successful, so there is a big potential for Haskell in more traditional shops.

Thanks for the link to Functional Java, I had not seen it before. I added Functional Java to my programming notes for Java resources - might use it sometime when doing a Java project.

Second that. I did write production Common Lisp code and loved it. I've used various languages over the years, including Perl, C, C++, C#, JavaScript and Python (and Pascal, Basic and x86 assembly to some degree, too), and I mostly write Go code nowadays (Kubernetes-related stuff). Still, none of these languages or their environments matched the Common Lisp coding experience with plain Emacs and SLIME.

Could you describe your setup / workflow ?

Typing from my phone, so just keywords. Emacs, SLIME, single repo, pair programming, monolith applications, mercurial, PostgreSQL.

Most influential was pair programming which I personally hated but had to admit it works.

thanks a lot

For what it's worth, on a fairly recent SBCL, here's what the compiler/interpreter prints when you enter the author's function:

  ~> sbcl
  This is SBCL 1.4.7, an implementation of ANSI Common Lisp.
  More information about SBCL is available at <http://www.sbcl.org/>.
  SBCL is free software, provided as is, with absolutely no warranty.
  It is mostly in the public domain; some portions are provided under
  BSD-style licenses.  See the CREDITS and COPYING files in the
  distribution for more information.
  * (defun add-text-padding (str &key padding newline)
    "Add padding to text STR. Every line except for the first one, will be
  prefixed with PADDING spaces. If NEWLINE is non-NIL, newline character will
  be prepended to the text making it start on the next line with padding
  applied to every single line."
    (let ((str (if newline
                   (concatenate 'string (string #\Newline) str)
      (with-output-to-string (s)
        (map 'string
             (lambda (x)
               (princ x s)
               (when (char= x #\Newline)
                 (dotimes (i padding)
                   (princ #\Space s))))
  ;     (MAP 'STRING
  ;          (LAMBDA (X)
  ;            (PRINC X S)
  ;            (WHEN (CHAR= X #\Newline) (DOTIMES (I PADDING) (PRINC #\  S))))
  ;          STR)
  ; ==>
  ;             #:G51)
  ; caught WARNING:
  ;   The function (LAMBDA (X) :IN ADD-TEXT-PADDING) called by MAP-INTO returns NULL but CHARACTER is expected
  ;   See also:
  ;     The SBCL Manual, Node "Handling of Types"
  ; compilation unit finished
  ;   caught 1 WARNING condition
Thus, it identifies the exact problem the author describes at the end.

Yeah, just because the language doesn’t force you to specify types, doesn’t mean that the compiler can’t infer them for you.

What was most eye-opening for me about CL was just how helpful the compiler of a dynamically typed programming language can be.

Checking dynamic types is not the work of a compiler. Checking types at compile time is static type checking. If a type is wholly dynamic (determined at runtime via something like casting to a type specified in user input), the compiler can't check it.

(Also, type inference is orthogonal to static vs dynamic typing.)

> Checking dynamic types us not the work of a compiler. Checking types at compile time is static type checking.

By your logic, Common Lisp is both dynamically and statically typed, because compile-time type checks are done by the compiler and warnings/errors are signaled whenever proper.

More, if you declare (OPTIMIZE SPEED), the compiler is going to print a list of all places where it was unable to infer types on its own and where it expects help from the programmer in form of type declarations for individual variables.

Also: why would padding be an optional keyword parameter (defaulting to nil!) when it's required to be a non-negative integer for the function to work correctly?

Since it is required, it wants to be a required parameter.

Those interested in both should check out Hackett: https://github.com/lexi-lambda/hackett

Whilst not “finished”, what she’s done here is incredible. It’s an implementation of Haskell semantics as Racket macros.

Totally agreed... I wish I had her talent ;-) Hackett actually inspired me to start a project that goes in the opposite direction, i.e. from LISP to Haskell rather than Haskell to LISP. I'm a PL amateur, so it's been really interesting to implement a macro system on top of an existing language, without it being difficult to use or seeming like a hack.

Common Lisp as hackish vs. protective is a nice way to describe it.

Another way to describe it is exploratory vs. implementatory.

In some ways Common Lisp is like Mathematica for programming. It's a language for a computer architect to develop and explore high-level concepts. It's not an accident that an early JavaScript prototype was done in Common Lisp, or that metaobject protocols, aspect-oriented programming, etc. were first implemented and experimented with in Common Lisp.

>Dalinian: Lisp. Java. Which one sounds sexier?

>RevAaron: Definitely Lisp. Lisp conjures up images of hippy coders, drugs, sex, and rock & roll. Late nights at Berkeley, coding in Lisp fueled by LSD. Java evokes a vision of a stereotypical nerd, with no life or social skills.

One of the points the author touches on is documentation. I just couldn't get why he then gave the advantage to Haskell, since the documentation of Haskell libraries famously consists of a few type definitions. So, you know that there is an abstractly named monadic function, and you know it takes a foo and returns a monadic foo. What more guidance would you ever need?

But when it comes to CL, he has expectations:

>I’ve opened an issue on GitHub of one quite popular library, asking the maintainer to write documentation, but after 6 months it’s still not written (strange, right?).

I think that was indeed a bit biased.

That said, and not defending the author, I would add, that as an advanced beginner in Haskell, I find types to be sufficient documentation.

Very often if I look for something in Hoogle, I would be just skimming through type signatures looking for what I want. If I look for something specific I would just put the type I want into the search field.

By contrast, e.g. with Java I tend to either google the operation I want to do using queries like "how to do blah" or read the Javadoc in advance to mentally index what a given library has. And there you really need to read into the text, trying to figure out why a method is overloaded multiple times with some obscure extra boolean flags.

edit: grammar
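Part of why signature-skimming works is parametricity: a fully polymorphic type pins down most of the contract by itself. A toy rewrite of `map` to illustrate:

```haskell
-- From the type alone you can read off almost the whole contract:
-- every element of the output must come from applying the function
-- to an element of the input, because the code can know nothing
-- else about the types a and b.
applyAll :: (a -> b) -> [a] -> [b]
applyAll _ []     = []
applyAll f (x:xs) = f x : applyAll f xs
```

That's why a Hoogle search for `(a -> b) -> [a] -> [b]` lands on `map` with essentially no ambiguity.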

I’m a Scala developer, using many libraries inspired by Haskell equivalents.

I have to disagree, types are not sufficient as documentation and this is what kept me from making the jump to Haskell actually.

But types are better than nothing, or better than outdated documentation. I can’t imagine anything worse than code written in a dynamic language and without any documentation.

This is the same discussion as with types vs tests. Types give you certain proofs for free, freeing you from writing certain types of tests. And they make property based testing easier. But the ideal is to have types + tests, types + good documentation, etc.
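As a small sketch of that division of labor (plain Haskell here; with QuickCheck you'd generate the inputs instead of listing them by hand):

```haskell
-- The type guarantees every output element comes from the input list;
-- it says nothing about ordering, which is what the property covers.
rev :: [a] -> [a]
rev = foldl (flip (:)) []

-- A property the type system alone does not give you:
-- reversing twice is the identity.
prop_revRev :: [Int] -> Bool
prop_revRev xs = rev (rev xs) == xs

-- Hand-rolled check over a few cases; QuickCheck's 'quickCheck'
-- would generate hundreds of random inputs instead.
checkProp :: Bool
checkProp = all prop_revRev [[], [1], [1,2,3], [3,1,2,1]]
```

Types rule out whole classes of tests; properties cover the behavior the types can't express.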

I enjoyed reading this article, fun read, even though I don’t agree with much of it.

My problem with Haskell: I use a subset of Haskell and when using Emacs/Intero/etc. I feel like I am productive and I enjoy myself. But reading and understanding/modifying other people's code is like running through mud. BTW, I wrote a Haskell book that just uses the small subset of the language that I use.

I very much enjoy working on my own projects in Haskell, and using Emacs and a REPL feels a lot like my 30 years of experience using Lisp languages.

I also use Common Lisp, but now mostly just for paid projects, not as often for side projects. (I wrote a Common Lisp book for Springer Verlag in the 1980s, which is probably why I still get occasional CL consulting work).

>Speaking of tests, recently I discovered that Zach Beane AKA Xach, an über-level Common Lisp hacker doesn’t usually write tests. FYI, he is the author of Quicklisp, that is something like (but not quite) Cabal or Stack. Quicklisp is de-facto the only widely used library manager in Common Lisp world, and so it’s written in Common Lisp and doesn’t have any tests. It’s a wonder for me how it’s not breaking!

Why is it "a wonder"? We could, and did, write robust code, for decades before testing and TDD became a thing.

I think the bigger problem, conceptually, is the assumption that tests are necessarily something you write and not something you do. There's nothing wrong with having automated tests, but they're a supplement to hands-on testing, not a replacement for it.

Informal and manual testing was a thing back then, though. Building a large project is usually a good, nearly exhaustive test suite for a build system.

Also, if memory serves, things that I had to deal with in late 1980s and early 1990s were vastly simpler than what is normal today. Manual testing was more feasible.

But Quicklisp exists today. Are you suggesting testing is unnecessary or ineffective? If it is effective, why wouldn't Quicklisp use it?

>But Quicklisp exists today.

So? Tons of the software of the 70s and 80s exists today too. In COBOL form, it powers some of the more critical banking, government, etc, systems. In C form, it powers just about everything else.

>Are you suggesting testing is unnecessary or ineffective?

Of course it's ineffective. It can only prove the presence of errors, not their absence.

>If it is effective, why wouldn't Quicklisp use it?

Because the need for testing also depends on how often you refactor and change your code, and how many bugs you're likely to put in it in the first place (based on the complexity of the domain, your skills, etc). I'd say both of those things are non-issues for Quicklisp.

So, if having no tests works for Quicklisp, and the program doesn't have many bugs people complain about, then that's it.

If a codebase works, is used, and doesn't seem to have bugs people complain about, we should also consult with reality when deciding if it's worth our time to write tests for it, not just some a priori ideology that they're necessary.

The older I get, the more I realize that the ultimate factor in programming productivity is how readable the language is, or how readable its typical idioms and usage are. There are a lot of really awesome languages, but in them it is tremendously difficult to read your own code, let alone someone else’s. You can find languages that are very expressive while you’re writing the code, but that code will be read much more often than the time you spent writing it, both by yourself and likely by other people.

For this reason alone, I can really see why python has been so enormously runaway successful.

The distinction between "hacking" and "protective" languages is a good one. If you want to be balanced as a programmer, you need to deeply understand at least one language in both categories. Protective languages are wonderful for large-scale projects (true "software engineering"), but they tend to lack the whimsy that drew so many of us to programming in the first place.

A similar distinction was made by Yegge before [1], which he (unfortunately) labeled as "liberal" and "conservative". It was quite controversial [2].

Another similar idea is Fowler's software development attitude [3]. He uses the terms "enabling" and "directing" instead.

[1] https://plus.google.com/110981030061712822816/posts/KaSKeg4v...

[2] https://news.ycombinator.com/item?id=4365255

[3] https://www.martinfowler.com/bliki/SoftwareDevelopmentAttitu...

It's hard to describe, but the discourse on hn in 2012 just felt different in some weird way.

The tone of the posts seems about the same as now to me, but the top level comments do seem to be longer and with more analysis than most top level comments lately.

Can you elaborate? What languages would fall in either category (I’m assuming python would be hacking and haskell would be protective)? Does static vs dynamic typing play a role?

Not the original poster. But I would call C and C++ hacking languages. They were designed to make really low-level work possible. You haven't written low-level code until you've had to access hardware registers by memory address :). This is such a completely different experience from really high-level languages that I feel the distinction should effectively be closer to the compiled/interpreted (JITed) divide.

Low-level languages are almost always "hacking" languages, since direct hardware access inevitably brings with it a degree of unsafety. But the reverse is not true: "hacking" does not imply "low-level," as exemplified by Python, Lisp, etc. I'd be really interested to see a low-level "protective" language, if such a thing is even possible. Something like Hoon, maybe?

Any language that has a C FFI (and that is a whole lot of them) provides easy access to low-level features. The managed languages (Java especially) make this harder than it should be.

At any rate Rust and Swift are both good examples of efficient languages that can easily make use of low-level system and processor features and which offer a good C FFI.

I would argue that C++ (or at least, modern C++) aims to /enable/ one to treat it as either. It gives the programmer the power to write very low level code if they wish, but also to define type safe interfaces and high level abstractions which allow the programmer to trust the glue that binds together their blocks of code.

I'm not sure I understand his point. He wrote the function, and in under a second, he was able to easily run it to test it and got a type error.

The only difference in a static world would be that the error would have shown up a few seconds earlier, before he needed to run it in the repl.

I mean, the error from CL was definitely vague, but is the error vague really because the language lacks static typing?

But he did get a runtime failure. Imagine this wasn't about adding a function, but changing an existing function that is being called from 23 different locations in a 100kloc codebase, and that it contains some tricky condition logic so that a bug might only be triggered in 5% of the calls. Without exhaustive unit tests, a type error potentially would not show up until all those call sites had been executed with the specific arguments required to trigger the bug. His point was that in a statically typed language, this would all be detected at compile time.
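As a rough illustration of that point, here's a Haskell sketch loosely modeled on the article's add-text-padding (the name and signature are invented here):

```haskell
-- Insert n spaces after every newline in the string.
padLines :: Int -> String -> String
padLines n = concatMap pad
  where
    pad '\n' = '\n' : replicate n ' '
    pad c    = [c]

main :: IO ()
main = putStrLn (padLines 3 "hello\nworld")
```

If padLines later grows an extra parameter, the compiler rejects every stale call site at once, no matter how rarely that code path actually runs.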

This error would be detected at compile time without it (as another commenter shows), and if you are not using the more dynamic features of lisp you really should declare the types.

Okay, sure, but that's just the classic argument for static type systems, i.e., soundness.

I thought he was trying to allude to something more.

The error was just bad, it could have been more explicit, and by now SBCL even issues a compile-time error for this problem. But you have a very valid point: the error was shown on the first testing of the function. And it is a very underappreciated feature of Common Lisp, that you can test almost every function by itself, you don't have to build and run an executable, you can, as the blog author did, test the single function and very quickly get to the problem.

That's great for functions you just wrote. It's not as great for functions you change in a large code base.

Use of compile-time type checking to make changing interfaces easier is a great trick. A great trick may mislead many to see it as the only one in the book. Then you get dumbfounded looks and a troublesome question arises: how can these large systems written in not-so-static languages (indeed, no thought about "systems") possibly grow and evolve? And yet, they do.

Maybe it's time to reconcile with reality: there is more than one way to operate; there is value in more than one paradigm.

Everything is of course possible, but at what cost?

In my experience, with large projects, you get to pick 2:

- Dynamic typing
- Development velocity
- Reliability

I've seen multiple large projects grind to a near halt in their development speed, and I've seen some retain development speed, but unreliably crash after various changes.

UT coverage is expensive. System tests don't catch everything. Either you fear changes, and the system rots, or you work very slowly with coverage for everything.

Here's an observation from Benjamin Pierce, static typing guru:

> Complex definitions tend to be wrong when first written down. In fact, not only wrong but nonsensical. Most programming errors are not subtle!

Static type systems will tend to find those bugs. They're not subtle!

Running the code—just once—will also tend to find those bugs. They're not subtle!

Syntax can also help here. Complex or verbose syntax can make bugs that aren't actually subtle much harder to spot. A readable syntax, API, or even an EDSL can make errors more obvious by making incorrect code _look_ incorrect.
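One cheap way to make incorrect code fail to even compile, sketched in Haskell (all names here are invented for illustration):

```haskell
-- Newtype wrappers cost nothing at runtime but make swapped
-- arguments a type error instead of a subtle bug.
newtype Width  = Width Int
newtype Height = Height Int

area :: Width -> Height -> Int
area (Width w) (Height h) = w * h

main :: IO ()
main = print (area (Width 4) (Height 3))
-- area (Height 3) (Width 4) would be rejected by the type checker
```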

And of course naming things is a huge part of this, not just the raw syntax.

> Quicklisp is de-facto the only widely used library manager in Common Lisp world, and so it’s written in Common Lisp and doesn’t have any tests. It’s a wonder for me how it’s not breaking!

Quicklisp also downloads and executes code over plain HTTP with no integrity checks whatsoever.

Yes, that is the default. But you could connect through an https proxy or check PGP signatures (see http://blog.quicklisp.org/2017/09/something-to-try-out-quick...).

Log of my last 3 minutes of activity:

  This is the TXR Lisp interactive listener of TXR 197.
  Quit with :quit or Ctrl-D on empty line. Ctrl-X ? for cheatsheet.
  1> (defun add-text-padding (str nspaces : newline-p)
       (let ((padding `\n@(mkstring nspaces)`))
         (when newline-p
           (set str `\n@str`))
         (regsub #/\n/ padding str)))
  2> (add-text-padding "hello\nworld")
  ** (expr-2:1) add-text-padding: too few arguments
  3> (add-text-padding "hello\nworld" 3)
  "hello\n   world"
  4> (add-text-padding "hello\nworld" 3 t)
  "\n   hello\n   world"
Note how (mkstring n) makes a string of n spaces by default; if you want a different character, specify it as a second argument.

I've never heard of trivial-update, but TXR has something like it built-in that I independently invented called upd:

  1> (defvar a 0)
  2> (upd a (+ 2) (* 3))
  3> a
  6
upd implicitly uses the syntax of the partial application operator op to create a pipeline of partial applications. To get such a pipeline by itself as a function, opip can be used; upd is simply the result of wanting a place-mutating operator that takes a place's value through an opip pipe and then writes the result back in.
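For comparison, assuming upd works as described, a rough Haskell analogue of such a pipeline over a mutable cell would be plain function composition:

```haskell
import Data.IORef

main :: IO ()
main = do
  a <- newIORef (0 :: Int)
  -- pipe the current value through (+ 2), then (* 3), write it back
  modifyIORef a ((* 3) . (+ 2))
  readIORef a >>= print  -- prints 6
```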

TXR Lisp also has a well-featured command line option parsing library built-in:


One other key difference between Lisp and Haskell... Common Lisp has a standard definition. My code from the last century still compiles and runs, as does code from all of the books.

On the other hand, Haskell seems to be a struggle. I recently downloaded the book "Write Yourself a Scheme in 48 Hours", written in 2007. Page 18 tries to "import Monad", which fails. I tried surfing for that string and only see "Control.Monad". I tried installing the latest Haskell and the workbench. Still no luck. So, 18 pages into a book written 11 years ago and the trivial examples don't work.

As of today, there are 2 versions of the Haskell specification; "Monad" is Haskell 98, but in Haskell 2010 it's "Control.Monad".

If you want to get legacy code to run with a current version of GHC, try "ghc -XHaskell98".

That also fails.

Certainly it does, but given that Control.Monad is sufficient, I do not see it as a larger issue. Haskell 2010 is a different language with refactored libraries. If you wish to use an older text (pre-Haskell 2010), then it would probably be best to try either Hugs98 or one of the other Haskell implementations, as GHC is concerned with the current standard.

The problem is the phrase "the current standard". Is Haskell going to become the new C++? C++11, C++14, C++17, etc? Is it going to live up to the phrase "I love standards. There are so many to choose from."? Is it going to become a python 2.7 vs python 3 vs python 4 game?

I understand the benefit of using the same name as you can trade on the prior mindset. But if old code won't compile then it isn't the same language.

If you're going to change a language with a standard (e.g. Haskell98), change the name. Call it Peyton or something. Otherwise there will be a Haskell20 that is refactored for dependent types that won't compile under Haskell10. Then when people claim to "know haskell" the question becomes ... which language?

The "current standard" game leads into "library hell".

I think in theory you are correct: as soon as you have 2 different "versions" of a language, you can, with sufficient creativity, construct a program that will break with the newer version.

How likely this is going to bite you in practice, though, is a different question, and one where the specifics of the language are important.

C++ is a language without a module system; instead, the preprocessor resolves #include directives by textual substitution, which is of course extremely fragile wrt. compatibility, particularly considering idiomatic "header-only libraries" of the boost variety.

Haskell doesn't use #include but has a module system, which avoids a lot of the C++ standard incompatibility issues, because you can compile each module independently with its required language standard.

Also note that there is a pragma you can put into modules to specify the version directly inline, e.g., "{-# LANGUAGE Haskell2010 #-}" or "{-# LANGUAGE Haskell98 #-}" .

I have to say, this is a very bad blog post. There are things to criticize in Common Lisp. And of course there is a valid discussion about static vs. dynamic typing. But the author gets a lot wrong, and that kills the validity of his criticism. What is left is a very negative article, which might discourage people from checking out Common Lisp, which would be a pity. Even if you do not end up as a permanent Lisp programmer, learning Lisp can teach you a lot about programming.

Before I comment about some aspects of the blog, about my background: I am a professional Lisp programmer, in the recent years I used Common Lisp less (working more with Scheme-style Lisps as they are provided by my work environment), but I have implemented (and sometimes still have to maintain) reasonably sized productive applications in Common Lisp.

The blog starts with the remark that coming back to his code after months took an hour - I consider this quite a reasonable time. You might be lucky and look at a function which can be modified without considering its environment, but often you have to spend quite some time before you can make changes - this has very little to do with the language.

Then the example he bases his blog post on - I think it has several issues, some already pointed out by other posters. Padding should be a required unsigned integer, not a keyword param - if you omit it (it then becomes NIL), the function will error. While current SBCL compilers even warn about how the map function is called, older ones don't give a good warning, and the run-time error message is not great. But the main issue here is: map is the wrong function to use in this context. Map, as he used it, builds up a string from the return results of the function it calls in the iteration, but this string is not used by the algorithm, because it writes to the string output stream s instead. Directly looping over the characters with "loop" would have been the better way to do it here. Interestingly, I don't think I have ever used map in my Lisp programming career so far.

The rest of the post focuses on two things: that Common Lisp doesn't have enough libraries, and static vs. dynamic typing. Funny, though, that he praises Python, where the availability of libraries certainly is great, without acknowledging that Python is way more dynamically typed than Common Lisp. The side comment "Macros are missing, but you can live without macros after all." hand-wavingly dismisses one of the strongest features of Common Lisp. They should be used with care, but macros are what puts Common Lisp ahead of most other languages - you can, inside your project, make careful adjustments and extensions to the language itself.

So, yes, the point about the amount of libraries has some merit, but attacking the one guy who did most to give all Common Lispers easy access to lots of libraries for "not writing tests" is not strengthening the argument. And while the Common Lisp community indeed could use more active contributors and more libraries, blog posts like this rather deter people. It would have been more productive to call for contributors. And from my own practical experience: yes, libraries are very valuable and sometimes essential to start a project. But once you become a maintainer of production software, they can also be quite a liability, as you depend on a piece of software you don't maintain.

This leaves the critique of the lack of static typing in Common Lisp. First of all, yes, Common Lisp is not a statically typed language. If that is a blocker, use a statically typed language, but then don't praise Python. There are many reasons which speak for static typing - and that is also a reason I have added Go to the programming languages I use. A proper static type checker can be quite a help when developing. Interestingly in this context, Go uses very limited type inference, so that usually the type declarations of the function parameters are enough and you don't have to explicitly type local variables. Which brings us back to what Common Lisp offers, especially SBCL: optional static typing. You can declare the type of any function parameter and of the return values of functions. You can declare the type of any local variable. Depending on your "optimize" settings (speed/safety), SBCL will insert the necessary type checks, use the type information in its type-inference engine, and report type errors wherever it can detect them. With fully typed code, SBCL can generate code which matches and occasionally even exceeds the output of gcc. So, Common Lisp has a lot to offer which the author had not tapped into.

> what Common Lisp offers, especially SBCL: optional static typing

Common Lisp offers optionally unsound static typing.

(SBCL doesn't change that, that's why it inserts runtime checks.)

I am not sure what you mean by "unsound static typing", care to explain?

SBCL does offer optional static typing as it goes beyond Common Lisp and can perform static type checking across compilation units.

If you have a lisp file like:

  (defun foo (x) (declare (fixnum x)) (* x 2))
  (defun bar () (foo 3.5))
Where bar has a static type error by calling foo with a float value, compiling the file with SBCL will give you the expected static type warning:

  ; file: foo.lisp
  ; in: DEFUN BAR
  ;     (FOO 3.5)
  ; note: deleting unreachable code
  ; caught WARNING:
  ;   Asserted type FIXNUM conflicts with derived type
  ;   See also:
  ;     The SBCL Manual, Node "Handling of Types"

A sound type system would have guaranteed that no program execution ever calls your function foo with the wrong type of argument. SBCL doesn't offer any such guarantee.

If some part of your program extracts values from a heterogeneous list and calls foo on one of them, SBCL will not be able to statically guarantee that it is a FIXNUM at runtime. It can't reject the program either, that would require seriously subsetting the language.

SBCL will issue a compile warning, that it cannot guarantee that you call the functions only with FIXNUMS.

Whoa. Will SBCL emit false-positive warnings all the time when you dare to use lists in a Lisp, or does it at least try hard to infer specialized list types, going above what's expressible using the standard LIST type?

No it doesn't emit false positive warnings when using lists. That is FUD. But if a function explicitly requires fixnums, and you pass something of type t, it might warn (depending on the optimize settings).

You didn't answer the question if SBCL infers specialized list types. I'll assume the answer is "no".

In that case, everything extracted from a list is T and you'd have to get that warning you mentioned above.

Such warnings would very likely be false-positive, unless Common Lisp programs are buggy as hell most of the time—assuming that would be the real FUD, no? Therefore, the non-FUD conclusion is to expect a high rate of false-positive warnings in typical list-using programs.

TBH, I suspect those false-positive warnings you mentioned aren't bona fide warnings, but some kind of optimizer stream-of-consciousness log stream.

List elements are of type t, and of course acceptable to pass to all functions which take t as an argument, so no false-positive warnings.

Are you saying that the happy path of SBCL static typing—given you also want to use lists, and also not be inundated with spurious warnings—is to declare everything to be of type T?

No, I am not. Perhaps you read up a bit about Common Lisp and SBCL before we continue the discussion.

  (defun sum (lst)
    (let ((acc 0))
      (dolist (x lst acc)
        (setq acc (+ x acc)))))

  (sum '("a" "b" "c"))
gives me a runtime error. Does sbcl support parametric polymorphism?

The problem as I see it is the type for lst.

Common Lisp has a (CONS α β) type for cons cells with member types α and β, but its LIST type is defined as (OR (CONS T T) NULL), basically more of a binary tree type, where T is the Common Lisp "any" union of all types.

A usable (LIST α) type would be something like μβ.(OR (CONS α β) NULL), but that's not expressible within standard Common Lisp. I expect it to be very hard to infer too, especially with CONS cells being mutable.
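For what it's worth, that recursive type is exactly an ordinary algebraic data type in Haskell, written out here as a sketch:

```haskell
-- Roughly the μβ.(OR (CONS α β) NULL) from above: a list is either
-- empty or a cons of an element and another list of the same type.
data List a = Nil | Cons a (List a)

sumList :: Num a => List a -> a
sumList Nil         = 0
sumList (Cons x xs) = x + sumList xs

main :: IO ()
main = print (sumList (Cons 1 (Cons 2 (Cons 3 Nil))))  -- prints 6
```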


I was introduced to Lisp in my CS320 Programming Languages class and fell in love.

The sad thing is I've never really used it in any "productive" way. Never for a freelance gig, and absolutely never in my job as a web developer. That being said, I have learned to appreciate languages that I don't necessarily use to directly make money.

I've learned a lot of programming concepts from Lisp that will be with me forever. Thinking about problems in a functional way has been beneficial to a lot of my work.

You've probably heard of http://lfe.io/, I imagine

> The fact is difficult to argue with, nil is definitely not a character. But why the heck do I get this? Can you tell?

Look at the stack trace first, to see what function signals that.

I think that Haskell is fine when your data structures are simple, for example tuples of length < 10. When you need more general structures, like lists of lists of elements of any type, then Lisp is a better fit. For example, I don't know of any computer algebra system (CAS) written in Haskell, but the free CAS Maxima is programmed in Lisp.

When do you have lists of lists of elements of any type?

What can you even do given such a value?

You could use XML data structures in which some of the elements are programs that you can execute (program is data), others are graphics, others are links to more XML data, and so on. And your program is able to process that information.

Then what you have is not "anything". You have an expression problem:

Either each element has some "handler" that does the right thing for that data type.

Or you have a set of cases that each data can be, and you handle them all.

Neither is just "any type". And static types are very suitable to describe either.

The handler being a typeclass in Haskell vs the enumerated cases being pattern matching?

A type-class or just a callback type, vs pattern-matching, yeah.

Can anyone compare Haskell vs OCaml vs SML vs Scala vs Nim vs Shen vs Rust?

All pretty different languages.

Haskell and OCaml are both functional programming languages (well OCaml has OO support as well) that have a REPL, interpreter, and native code compiler. They have decently sized communities considering that they are mainly academic languages although Haskell is starting to see industry use at some big name companies like Facebook. OCaml is maintained by the French national lab INRIA and is extremely popular at the fintech firm Jane Street. OCaml inspired much of the syntax of F# (.NET functional programming language). Haskell takes functional programming to the extreme and has a lot of academic jargon (Monads) that can turn off beginners. Scala is like OCaml in that it has support for both OO & FP, but it runs on the JVM and you can use Java libraries. I think both Haskell and Scala have excellent concurrency support. I forget which one uses STM. OCaml has been supposed to be getting multi-core functionality for a long time I think. These three languages are all pretty standard business languages that you could build a business out of. The fact that Scala can use the JVM is super super nice as a lot of Haskell and OCaml libraries are not something I would trust in production. FPComplete is a consultancy for helping firms use Haskell commercially.

Nim is a language with Python-like syntax that transpiles (or compiles) its source code to C or JavaScript and then uses those compilers to get really fast code and an executable to distribute. It is still pre-1.0, but it has the option to use or not use garbage collection. A pretty cool language. It can be used for fairly high-level projects where Python might normally be used, all the way to OS and video game work with some effort. Rust is mainly meant to replace C++ (closer to D than Nim, but they do overlap some). Rust uses the LLVM compiler and is being used at Mozilla. It has had a lot of hype for a while now. It is supposed to be a really safe language. The syntax is a mix of imperative, OO, and FP in my opinion. It is quite fast. Lastly, Shen is a research language from Mark Tarver (who wrote the bipolar lisp programmer essay). Last time I looked it focused on theorem proving and ran on top of another lisp (can't remember which, but maybe Racket or Common Lisp). It used to have a pretty odd license, but I think that's been fixed for a while. It has built-in support for Prolog functionality like some other lisps, which is cool. He also has taken time to write some libraries which let you use Tk for GUI, which is also neat. I wish I had time to check it out.

Lisps are kind of like the ultimate dynamic languages, while Haskell is like the ultimate static language. Lisp has more flexibility, but the Haskell compiler will catch a lot of bugs.

If you want to use Tk as a GUI, you can do that with any Common Lisp via LTk (https://www.cliki.net/ltk).

Thanks for your detailed explanation. I forgot the wonderful D language.

Can you give some comparison of their type systems in detail? Thanks.

D is more like an improved C++ with GC.

The type system is quite similar to C++'s, just that the language has better support for meta-programming, modules, packages, and an explicit notion of unsafe code (@system in D speak).

Code generation at compile time is done via compile time code execution, templates or plain replacement.

It has a GC, but like all GC enabled systems programming languages, has mechanisms to control its execution, forbid its use in performance critical sections (@nogc attribute) or just manually allocate memory via stack, globals or plain OS calls.

why do you think haskell is the ultimate static language? surely it has tough competitors in f#, sml, and idris, all of which have big features haskell doesn’t have.

I'm not aware of any language that can go up against Haskell in the bleeding edge category. I'm sure there are plenty of features such as mutable state in functions that were intentionally left out. I thought Idris was supposed to be Haskell on JVM? Let me know which features it is lacking from the other two. I'm not familiar with SML. Overall I guess it doesn't really matter.

The JVM Haskell is called Eta (I have also heard of another similar project, Frege). Idris is Haskell-ish, but it has dependent types - the "new" typing hotness - which offer typing even more advanced/fine-grained than Haskell's[1]. AFAIK using types to create something resembling a proof of correctness isn't uncommon in Idris. Or stuff like a function that takes a number and a proof that the number has some property, say even-ness. So I'd say Idris is even more researchy/bleeding-edge than Haskell.

[1] Some of these features can be emulated in Haskell, but AFAIK it's kinda clumsy at the moment. There's an ongoing initiative called DependentHaskell that aims to improve that.

Thanks for the explanation and correction!

Think a better definition of Idris would be Haskell with theorem proving, and without forcing everything to be lazy.

It's compiled with a C compiler or to JavaScript currently; there is no JVM implementation. You might be confusing it with Scala.

extremely popular at the fintech firm Jane Street

I don’t know if I’d describe JS as fintech. Their technology is not a product that you can buy; it exists to power their real business, which is prop trading.

As far as Jane Street develops or commissions software, it is fintech software. Who cares if it isn't their end product?

In that case every bank, insurance company, etc in the world is ”fintech” and the term is meaningless

True to a degree, but most would say there is a difference between standard bank "x" and a company that has billions in trades going through it. A fine lined distinction though.

Most banks don't have lots of people writing complex algorithms for their day to day job.

Most banks don't have complex algorithms (the sort that some would consider OCaml or Haskell a good fit for) in general, they only have complex software because they have complex business and regulatory requirements, complex and incoherent existing systems to work with, cost constraints and organizational issues.

I'd agree with that. I don't work in banking, but definitely understand how various regulations can complicate things.

I agree. I was in technology for in-house use in both commercial and investment banks in the 90s. I definitely refer to my work back then as fintech as much as my more recent work on algo trading systems for direct client use.

What about the Static Lisp : Shen language?

> Lisp and Haskell are arguably the most peculiar languages out there, at least they are from my experience.

Clearly the author has yet to encounter HOtMEfSPRIbNG!

Greedily evaluating both Haskell and OCaml for systems development work rn. Haskell seems a lot cleaner, perhaps too purely clean, and OCaml seems to have some rough edges where it can’t/doesn’t infer types without awkward syntax. Lazy haskell has STM, monads, parallelism, concurrency and a huge community. Greedy OCaml can do impurity and pseudo-procedural code easier but lacks much of what Haskell has. Thoughts?

What exactly does "systems development" work entail?
