Hacker News
Why Clojure? (2010) (thecleancoder.blogspot.com)
223 points by edem on Dec 30, 2016 | 279 comments



I tried and really wanted to like Clojure, but I hated the error messages. I love how it's a lisp, I love how you write programs in it, I love the REPL, it all just feels nice to me.... until I miskey something or make some other error. Then the compiler seems to hate me personally.

"Here's a haystack where the error /might/ be, have fun finding the needle dipshit." Maybe I'm just spoiled with Elm, Rust, and Elixir's error messages, but the last time I tried Clojure (more than a year ago) I just hit a wall when I tried to make a toy app, as the Clojure compiler seems to hate me even more than C's compiler does.

Has this situation improved? If so, I'd love to take another stab at it.


Clojure has added a new library called 'spec', and a lot of these specifications have been added to the core of the language. These should help a lot with error messages, especially when using core libraries.

See http://clojure.org/about/spec

It's still not great, but improvements are happening.
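For anyone curious, here's a rough sketch of the kind of checking spec enables (the ::age spec and the values are made up for illustration; in Clojure 1.9 the namespace ended up as clojure.spec.alpha):

```clojure
(require '[clojure.spec.alpha :as s])

;; register a spec for a hypothetical ::age value
(s/def ::age int?)

(s/valid? ::age 42)       ;; => true
(s/valid? ::age "forty")  ;; => false

;; explain reports *why* a value fails, not just that it failed;
;; the exact message format varies by Clojure version
(s/explain ::age "forty")
```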


Thanks, I don't remember that being there the last time I was into Clojure, and I'll have to give that a try.


I always use something like clj-stacktrace[1] to pretty print error messages and I've never had any issues figuring out what the problem was.

Most of the time you're working from the REPL inside your editor anyway, so the iteration loop is much, much better than any other language's. Even without clj-stacktrace I'd still prefer Clojure development to anything else.

Rebooting a program and losing state is always worse than error messages taking a bit longer to understand at first.

[1]: https://github.com/mmcgrana/clj-stacktrace
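For reference, usage looks roughly like this (a sketch based on the project's README; it needs the clj-stacktrace dependency on the classpath, so treat the details as approximate):

```clojure
;; clj-stacktrace must be on the classpath for this require to work
(require '[clj-stacktrace.repl :refer [pst]])

(try
  (/ 1 0)
  (catch Exception e
    ;; pretty-prints e's stack trace with aligned, colorized frames
    (pst e)))
```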


The example shows what's wrong:

1) Bad error message: the error is explained in terms of some other machine, but not at the language level.

2) A zillion lines of useless stack-trace information: I don't want to debug the REPL, but the wrong call. The stack trace also lacks argument information.

Generally I would hope for a 'live' backtrace.

> Most of the time you're working from the REPL inside your editor anyways so the iteration loop is much, much better than any other language's.

Any other language except those that have a similar or better REPL or debugger, of which there are a zillion.

> Rebooting a program and losing state is always worse than error messages taking a bit longer to understand at first.

Not sure how that is related...


> Rebooting a program and losing state is always worse than error messages taking a bit longer to understand at first.

Not sure if it's _always_ worse. I've worked in imperative & functional languages, and the quality of error messages is key to solving bugs and understanding a sufficiently complex system, regardless of what language it's coded in.


Maybe not always, but it's important enough to consider.

You often have much more information at runtime than at compile time. Sure, I can run the compiler and scan its errors, but I feel it's much more productive to poke around the program as it's running. Even when working in C++ or whatnot, I learn more about programs by running them in the debugger than by reading the code (reading UE4's source code would be weeks of head scratching because it's so huge; two days of stepping into its boot process and you know and _understand_ most of its architecture).

I'd argue that the hardest bugs to find are the ones your type-system is completely helpless against. Few languages protect against null pointers and every time you have a cast you're breaking out of the type system and its guarantees. Dependent types aren't frequent either so you're now augmenting the type-system with asserts and guards and whatnot.

I find that in C++/Java/C# complex systems are the norm because everything is built out of mutable blocks and misconceptions about what makes programs fast.

Every project I did in Clojure was a fraction of the complexity it would've had in imperative languages. You're reducing complexity _so much_ that the "type-system is good for complex bugs" argument almost vanishes.


> ... I learn more about programs by running them in the debugger than reading the code

Agreed - in Ruby, most of my development was driven by a REPL & debugger, and tests. That's an example of a primarily imperative language, though it borrows some ideas from Lisp. But there, again, understanding error messages was key.


This seems a jillion times better than a standard stack trace. Why isn't it built into the core?


The core just uses whatever the VM under it has for error handling. Clojure is a hosted language, and one of the best decisions it made was not to hide the VM it's sitting on (either the JVM, CLR or JS; all have slightly different exception handling).

I see it more as a development library and don't ship it to production.

There's also cljs-devtools and dirac to enhance the chrome devtools with ClojureScript support. Once you try them you're not going back :)


I guess. But core could spit back both a verbatim VM stack trace and a Clojure-specific, pretty-printed version.


It's especially disappointing coming from Common Lisp and expecting a debugger with restarts etc on error. Just getting a stack trace feels so primitive.


That wouldn't be possible on the JVM. CL's condition system is unique to CL AFAIK. Even Emacs' cl module doesn't have this condition system. You really need support from the ground up to have it.

Clojure is a hosted language, CL isn't. There are loads of tradeoffs from that design decision alone. Saying it feels primitive for that difference only is jumping to conclusions rather quickly! It feels like you're comparing decades of CL experience with days of Clojure experience :)


> Clojure is a hosted language, CL isn't.

There is nothing in the CL standard which says a single word about it. The idea of the standard is that it is possible to have different implementations: native on some CISC/RISC/Lisp CPU, on top of C, on top of a virtual machine, on top of the JVM, using LLVM, etc.

> That wouldn't be possible on the JVM. CL's condition system is unique to CL AFAIK.

Let's run ABCL on the JVM, on my ARM-based ODROID:

    Armed Bear Common Lisp 1.3.3
    Java 1.8.0-ea Oracle Corporation
    Java HotSpot(TM) Server VM
    Low-level initialization completed in 1.242 seconds.
    Startup completed in 6.577 seconds.
    Type ":help" for a list of available commands.
    CL-USER(1): (handler-case (and (evenp 2)
                                   (/ 3 0)
                                   (oddp 1))
                   (error (c)
                     (princ c)
                     (values)))
    Arithmetic error DIVISION-BY-ZERO signalled.
oops, it supports the CL condition system...


> Clojure is a hosted language, CL isn't.

CL works on the JVM too (ABCL). There is a "java" layer where you can introspect classes, create them dynamically, etc. The syntax is not as terse as Clojure's. One thing I have had difficulty doing in Clojure, for example, is abstracting over types:

    (fn [type] (.. type (staticFunction)))
As far as I know, the above doesn't work because "type" is expected to be known at compile-time.
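For what it's worth, you can get runtime dispatch on static methods by dropping down to reflection. A sketch using Clojure's internal clojure.lang.Reflector class (call-static is a name I made up):

```clojure
;; invokeStaticMethod takes a class, a method name, and an argument
;; array, resolving the method at runtime instead of compile time
(defn call-static [^Class klass method-name & args]
  (clojure.lang.Reflector/invokeStaticMethod
    klass method-name (object-array args)))

(call-static Integer "parseInt" "42") ;; => 42
```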


Didn't know about ABCL :)

You have defprotocol to dispatch based on types in clojure. There's deftype to dynamically create classes and Java has reflection to begin with. You can call .getClass at runtime (which is what the type function does).


Thanks


It turns out that it is, in fact, at least mostly possible to implement a condition system in Clojure. See https://www.youtube.com/watch?v=zp0OEDcAro0

I did a little bit of googling for Clojure libraries that do this: https://github.com/zcaudate/ribol and https://github.com/clojureman/special

I haven't actually tried any of these and don't recommend them. In particular, I think using a non-standard library, which is probably relatively poorly tested, for your error handling is a bad idea.

I just wanted to point out that it's possible. I also think I remember something in a Rich Hickey talk (maybe "Clojure for Lisp Programmers"?) where someone in the audience asks about condition systems, and he responds that Clojure doesn't have one, but that it should be possible to build it.
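To illustrate the idea, here's a toy version of the handler/restart split using nothing but a dynamic var (all names here are invented; real condition systems, including the linked libraries, do much more):

```clojure
;; Low-level code consults *parse-failure* to decide how to recover;
;; high-level code rebinds it to pick a strategy.
(def ^:dynamic *parse-failure*
  (fn [token] (throw (ex-info "unparsable" {:token token}))))

(defn parse-int [token]
  (try (Long/parseLong token)
       (catch NumberFormatException _
         ;; instead of unconditionally unwinding, ask the current handler
         (*parse-failure* token))))

;; the caller chooses the "restart": substitute nil for bad tokens
(binding [*parse-failure* (constantly nil)]
  (mapv parse-int ["1" "oops" "3"]))
;; => [1 nil 3]
```

The low-level code decides where recovery can happen; the caller decides which recovery to use, without hard-wiring one error-handling strategy into the library.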


Thanks for pointing this out! I agree about being very careful in choosing to use such libraries!

One concern I have is how well they integrate with the native exception system of the VM and what stopping on an error waiting for programmer input would mean in a lot of contexts.

For example, stopping in a request handler while the programmer fixes the code will most likely trigger a timeout on the other end completely killing any advantage the condition system had in the first place.

You've effectively bloated your programs with libraries you're not even taking advantage of :)


In Common Lisp, during development, you can fix a surprising number of mistakes before the browser times out, especially mistakes involving things like typos.

The other advantage of the condition/restart system is that it makes your libraries more reusable: rather than having either to choose one error handling strategy or to construct a system for choosing an error handling strategy, the language itself provides a construct for making several error handling strategies available that higher-level code can choose between in a straightforward manner.


That difference does feel primitive, though; it's like a C-style edit->compile->run cycle vs talking to a listener. Like you said, tradeoffs, and Clojure definitely has a lot of other stuff going for it, but its version of interactive development is just a lot less interactive.


That's ignoring the REPL :)

You rarely reboot a program under development in Clojure - you rather connect your editor to its network REPL and live code everything from there.

This makes not having a condition system a minor inconvenience at best.


Until you get an error at the REPL.


Which is fixed interactively at the same REPL. You're just restarting a computation instead of resuming it. Rather than recompiling/rebooting the entire program and losing its state in the process.

However, this style of development has taught me to write code that's resilient to such errors in the first place (that is, errors won't leave the program in an invalid state), and the end result is that I feel much more confident putting code in production. Restarting computations at the REPL is effectively free and feels the same as using the condition system for the most part.


That's my point--you're restarting the 'computation' instead of resuming it, compared to restarting the program. It's the same benefits you acknowledge, just more so.


It depends on the context. Resuming a request handler after the client timed out is no good. Even in CL in those cases you're fixing at the REPL and having the client redo the request.

Or you're developing your function in a sandbox and restarting the computation has no difference to resuming it in the first place.


Would people who downvote care to explain WHY they're downvoting? Downvoting is a terrible way to disagree with a comment.


I'm in the same boat. Unless you already have Java errors burned into your brain, it is very hard to decipher them in terms of Clojure code.


My understanding is that the new spec system makes the error messages much better.


My "functional programming epiphany" came in a talk by Martin Odersky, who remarked that imperative programming is like thinking in terms of time, whereas functional programming is like thinking in terms of space. Don't think about the sequence of how to achieve things, but the building blocks needed to do so. That nailed it and made me a Scala convert ever since.


Hickey himself based Clojure on a different interpretation of time (place vs. value). He talked about it at a conference long ago.

Moving away from falsely linear time and shared mutation is one strength: you can rely much more strongly on what has been assumed. Very good for concurrency.


That talk was "The Value of Values," which I enjoyed as well.

Links for anyone interested:

https://www.infoq.com/presentations/Value-Values (1 hour version)

https://www.youtube.com/watch?v=-6BsiVyC1kM (1/2 hour version)


That's a great talk and has had a lot of influence. It's specifically referenced in Project Valhalla's proposal: http://openjdk.java.net/jeps/169


Ha, good catch. That's a strong sign if Oracle/Java is using it as inspiration.


The epochal time model of Clojure was an enlightenment moment for me when I was learning Clojure. I now sorely miss it when I have to work in any other language :)


The book "Learn You a Haskell for Great Good!", which is in my opinion an essential read for someone wanting to get into functional programming, describes it as: "imperative is when you tell the computer a sequence of operations to get a result. FP is when you tell the computer what things are."

I think it should basically be required reading for any programmer, it's a very easy to follow look into the functional paradigm. It's also free to read online! http://learnyouahaskell.com


I own a copy and can't help but disagree. It goes over a few things like list comprehensions, types, etc., but I couldn't even stumble through writing a basic Haskell program when I finished. I'm looking forward to Manning's Grokking Functional Programming if it ever comes out. The Haskell Book is also popular these days.


Really? It gave me a great grasp of the language and syntax and I was able to write some basic things immediately after, and anything I didn't know how to do (like communicate over a socket, etc.) I could google search.


Highly recommend http://Haskellbook.com


Why not think both in terms of space and time?



I mean, that's what imperative programming ends up being to some extent. But the idea is that the fewer things you have to think about, the better.


Clojure core.async!


Best example I heard was in an F# talk. The guy used a bar tending analogy:

FP => I'll have a Sazerac

Imperative => Excuse me sir, could you take some ice, add rye whiskey, add bitters, add absinthe, shake, strain into a glass, and add a lemon garnish before bringing it to me


Well, isn't this nice: outsourcing the knowledge of what makes a Sazerac, and how to make it, to somebody else, and just declaring that you want it?

Would you mind actually making the Sazerac in your FP "analogy" as well?


I like Clojure, but stopped using it due to not having a good way to define DTOs at a service level (I prefer noisy statically typed languages, apparently). Here's my best guess:

  (-> {}
    ice
    (rye :2-fingers)
    bitters
    absinthe
    shake
    strain
    garnish)
Now I think that strain flipped the returned type from a drink to a glass with the drink in it.

All this shows is that OO and FP are duals [1]. I don't claim to get FP perfectly, but my moment of zen was realizing this.

1 - http://wiki.c2.com/?ClosuresAndObjectsAreEquivalent


Thank you, I had not seen that c2 page. Many in the JavaScript community have fervent arguments for/against closures/objects (which I do not share). The educated debate in that link is a quality resource on the subject.


forgot the sugar!


Haha...didn't think so many people would respond. There is a Lisp example below. F# is similar, but with pipes and arrows.


Imperative (especially Java-style OO-imperative) programming will just about always win over non-OO declarative programming in an analogy like that (i.e. a Simulation of a real world process) by nature of them being focused on step-by-step "world manipulation" (and object interaction in case of OO).

The value for FP comes from proper abstraction over these processes in functional terms, at which point they can be trivially implemented (few bugs, few iteration cycles to get right). This can probably be done for every problem space; the question is at which point the abstraction costs outweigh the gain. Considering that FP is becoming more and more mainstream, it's probably more viable than thought in the past; still, I imagine system-driven games or complex real-world simulations, with lots of side effects, would lose more from FP than they'd gain.


> Imperative/OO [...] programming will just about always win over non-OO declarative programming in an analogy like that (i.e. a Simulation of a real world process)

Some time ago I thought that, too, but I'm no longer convinced. Directly mutating values (OO style) feels more natural at first, but then you have trouble with side effects, and order of execution matters more than it should; you start keeping snapshots of the whole state just to get a consistent world state during computation, otherwise this whole mess produces a whole class of bugs on its own. These problems drive you more and more in the FP direction, and the FP style definitely has its merits in this regard.

I think the following article articulates this very well:

"A Worst Case for Functional Programming?"

http://prog21.dadgum.com/189.html


FP is more like: drink(sazerac(garnish(strain(shake(absinthe(bitters(whiskey(ice(glass)))))))))

Ultimately it's the same result? The difference is when you can reuse and compose functions.
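To make the reuse point concrete, comp turns that nesting into a named, reusable pipeline. A sketch with stand-in functions (each step just tags a map, since none of these functions exist anywhere):

```clojure
(defn step [k] (fn [drink] (assoc drink k true)))

(def ice     (step :ice))
(def whiskey (step :whiskey))
(def shake   (step :shaken))
(def strain  (step :strained))

;; comp yields a new function; the rightmost function runs first
(def prep (comp strain shake whiskey ice))

(prep {}) ;; => {:ice true, :whiskey true, :shaken true, :strained true}
```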


Exactly.

With FP you start with functions and just keep composing. With OO you start with classes and objects.


In OOP you compose nicely as well. It is also easier to understand because it reads like a conversation.

    Me(Drinker)->drink(
        Sazerac(Cocktail)
            ->garnish()
            ->strain()
            ->shake()
            ->absinthe()
            ->bitters()
            ->whiskey()
            ->ice()
            ->glass())


I would probably use threading to achieve comparable readability.

  (def sazerac 
    (-> (mix :ice :whiskey :bitters :absinthe)
        shake
        strain
        garnish))

  (drink :me sazerac)
I've never had a Sazerac. It sounds like a nice drink.


I would create a Cocktail data structure and an instance of Monad for it.

    sazerac = do
        add ice
        add ryeWhisky
        add bitters
        add absinthe
        shake
        strainInto glass
        add lemonGarnish

    main = serve $ makeCocktail sazerac


Haskell really is a pretty nice imperative language sometimes.


That isn't composition to me but sequencing. Function composition yields a function, not the result of applying the functions.

Either you have an object with all of garnish(), strain() and whatnot on it, or each method returns the object to handle the next step in the chain. Neither method scales at all without modifying existing code.

The real difference is that function composition gives you a reusable function you can further compose while objects keep piling up methods until you're left with god objects or indirection hell.


And people have the cheek to complain about parens in Lisp...


It's ironic, because foo() and (foo) have the same number of characters; but the latter is actually data you can manipulate directly.

Reminds me of the blub paradox in Beating the Averages[1].

[1]: http://paulgraham.com/avg.html
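To illustrate: a quoted Clojure call is literally a list, so ordinary list functions (and eval) apply to it:

```clojure
(def form '(+ 1 2))

(first form)  ;; => + (the operator, as a symbol)
(eval form)   ;; => 3

;; build a new program from the old one by swapping the operator
(eval (cons '* (rest form))) ;; => 2
```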


I once had a comment where I translated a Clojure function into its equivalent syntax in Python. It was still pretty hard to read. I think it's the way Lisp uses function composition for everything that makes code hard to parse until you get good at it, even though the standard practice is to hide it with macros and many small functions.

https://news.ycombinator.com/item?id=11174946#11177360


I prefer forward-building, pipe-style FP. Reading outwards at the statement level still makes things hard for me.


Fluent interfaces from OOP are different from composition because they mutate-and-return the object reference; if the instance is aliased anywhere else in the program there is spooky action at a distance. Composition is about programming with values, without mutation. As far as syntax goes, it is a trivial macro to translate `x.f().g()` into `g(f(x))` (Clojure actually provides it: http://clojure.org/guides/threading_macros)
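You can see that rewrite directly with macroexpand; no objects are involved, it's purely a syntax transformation:

```clojure
;; thread-first: each form becomes the first argument of the next
(macroexpand '(-> x f g))
;; => (g (f x))

;; thread-last threads as the *last* argument, handy for seq pipelines
(macroexpand '(->> x (map inc) (filter odd?)))
;; => (filter odd? (map inc x))
```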


> they mutate-and-return....

Not necessarily; you can easily write instance methods which merely copy the existing object.


If you want to create a bar food (a FingerFood object, not a Cocktail) which could also use a garnish() method, which inherits from which? Or should both objects inherit from a common parent, a Garnishable object? It's clear to me that object composition is less flexible than functional composition.


You can have a trait/interface. But since garnish will be doing different things, it is OK to have two implementations. FP looks nice in theoretical examples; in the real world, not so much.

In FP you will end up with garnishFingerFood and garnishCocktail because you need to encode the specifics of the garnish action somewhere. In OOP you will have garnish methods on Cocktail and FingerFood, and the specifics and related knowledge of how to perform the garnish will be on the object itself.

OOP is a really powerful concept, but it fails in languages that have shit implementations. Java and C++ forsake OOP principles for "performance", or are made by people that do not understand the concepts (Python, PHP).


From the specific example you're presenting, I don't see how you couldn't just have a Garnishable typeclass that implements garnish differently for the FingerFood or Cocktail types.

The comment that FP isn't nice in the real world is pure baloney. For lots of "real world" IO-bound types of problems there is nothing better suited than a functional programming language with powerful abstractions. Things like monads let you write code in an imperative style without losing any of the benefits of writing in the functional paradigm.


OOP doesn't actually have the market cornered on polymorphism. As my sibling replies show, FP languages have long known how to achieve it.


> In FP you will end with garnishFingerFood and garnishCocktail because you need to encode somewhere a specifics of garnish action. In OOP you will have garnish methods on Coctail and FingerFood and specifics and related knowledge how you need to perform garnish will be on object itself.

You're complecting. In the real world of Clojure, you could simply define a protocol and provide different implementations of "garnish".
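A sketch of what that looks like (the record and keyword choices here are mine):

```clojure
(defprotocol Garnishable
  (garnish [item]))

;; two independent implementations of the same operation
(defrecord Cocktail []
  Garnishable
  (garnish [item] (assoc item :garnish :lemon-peel)))

(defrecord FingerFood []
  Garnishable
  (garnish [item] (assoc item :garnish :parsley)))

(:garnish (garnish (->Cocktail)))   ;; => :lemon-peel
(:garnish (garnish (->FingerFood))) ;; => :parsley
```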


You just have the method as an aspect that you import into the class ;)


Generic functions: functional yet dynamically dispatched.


Generics aren't dynamically dispatched. Java has type erasure and C# emits variants for every instantiation.


No, I was talking about (Common) Lisp's generic functions (http://clhs.lisp.se/Body/m_defgen.htm). I realize I did not give enough context. Define a generic function:

    (defgeneric garnish (what with-what))
Specialize it on one or multiple arguments:

    (defmethod garnish ((c cocktail) (f fruit)) ...)
    (defmethod garnish ((s sandwich) (h ham)) ...)
    ...
But you can use it like a function:

    (let ((currified (rcurry #'garnish :curry)))
      (map 'list currified items))


Ah my bad! :)

These look very much like Clojure's protocols and multi-methods.


Yes, more like multi-methods, except for standard qualifiers (:around, :before, ...) and user-defined ones. OTOH, Clojure allows you to define a custom dispatch function.
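For comparison with the defmethod example above, a Clojure multimethod dispatches on whatever the dispatch function returns; here a vector of both arguments' :kind (the data shapes are invented for illustration):

```clojure
(defmulti garnish (fn [item topping] [(:kind item) (:kind topping)]))

(defmethod garnish [:cocktail :fruit] [item _topping]
  (assoc item :garnish :cherry))

(defmethod garnish [:sandwich :ham] [item _topping]
  (assoc item :garnish :ham))

(:garnish (garnish {:kind :cocktail} {:kind :fruit})) ;; => :cherry
```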


Ah yes, Clojure has no first-class support for aspects. It's easy enough to monkey-patch definitions for the rare case when it's needed.

Most of the time however I try to avoid aspects as they introduce hidden behaviour in existing functions, which can be hard to reason about at scale, especially with more than a few developers.


I use them all the time. It's a great way to separate responsibilities.


I prefer function composition for that when possible :)

Aspects I feel are more useful when you want to modify the behaviour of existing code you don't own.

Maybe if I had easy access to them I'd find more use cases :)


The example is just about abstraction.

The imperative equivalent would be:

"Give me a Sazerac!"

(imperative, hehe)


"It is imperative that you give me a Sazerac!"


Some time ago at University, we had to develop a puzzle solver in FP. In the report's introduction, I wrote something like "In this project, we must code imperatively in the functional programming language OCaml". The joke was well received.


First of all, that's not how you make a Sazerac. You rinse the glass with absinthe and toss the absinthe, you don't add it to the mix. There's also an ordering dependence: the rye and bitters can be mixed in either order, but adding ice, shaking, and straining should happen in that order with nothing in between, because the longer the warm ingredients are in contact with the ice, the more you're watering down the drink. I'm not going to go so far as to say the lemon garnish is wrong, but an orange peel rubbed on the rim and then garnished is better IMHO.

Second, that's not functional programming, that's calling a library function.


I think a better analogy would be:

FP => I'll have a Sazerac

Imperative => Serve me a Sazerac

Imperative programming isn't devoid of abstractions; it just has different ones.


I've used Clojure in a few cases and I totally adore the simplicity of its Lisp syntax, as opposed to the baroquesque abomination that is Scala. However, the lack of strong typing is sorely felt. I've done a little playground-style OCaml coding, and the feeling you get with OCaml is that once your program compiles, it most likely also runs correctly. Is a Lisp with strong typing for the Java ecosystem too much to ask? Apparently it is, or else we would have had it by now.


> I've done a little playground-style OCaml coding and the feeling you get with OCaml is that once your program compiles, it most likely also runs correctly.

This is only a feeling. Without carefully testing the code to exercise its cases, all you know is that the code is properly typed.

For instance, suppose we write a complicated function (or group of functions) which goes through a block of intermediate code (output of a compiler) and assigns registers to all the temporaries, introducing memory spills in situations where more variables are live than the available registers.

We might have a feeling that because this code compiles, it must be free of problems such as accidentally assigning the same register to two variables which have overlapping lifetimes.

That feeling is poorly supported by reality; it is the "safyness" of static typing.

Untested code is garbage. And thorough testing is difficult to impossible, so there is a bit of garbage in almost all software, unfortunately.

Statically type-checked code has all of its code paths effectively tested by the compiler, but those tests have only the limited point of view of trying to show that the program contains a trivial type mismatch, a close cousin of the syntax error. These "tests" are not actually feeding values into the code and trying to make it fail or behave incorrectly with respect to its specification.


I mostly agree. In an increasingly distributed, decoupled, non-monolithic, service-driven world, where service interfaces are dynamically-typed and applications are data-driven, static typing doesn't add much value. And the sophisticated type systems, that can provide better guarantees than relatively primitive type systems like Java, are quite difficult to understand and use correctly. I expect that we'll see more and more type modeling bugs as static type systems become more feature-rich and complicated.

Static typing is extraordinarily useful in large company settings, where you may own a library that is used across the company, and you can quite easily understand how your library is used in other codebases and can confidently make cross-cutting emergent patches.

I'm pretty optimistic in the direction that Clojure is heading with clojure.spec. The ideal that I hope it reaches is "inductive" type safety, where you can prove your program is typesafe with a certain probability. And the tradeoffs you get by relaxing the deductive properties and timeliness of static typing are vastly simpler expressiveness, composability, testing, etc.


> Statically type checked code has all of its code paths effectively tested by the compiler

The type checker just ensures that a program is well-typed (i.e. free of type errors).

The better the type system the more program properties can be encoded in it and the more errors it can catch (up to the point of proving correctness of your program).

But you are right that type checking alone is no substitute for testing. They are orthogonal concepts and both should be employed to ensure that your programs behave correctly.

One nice thing about statically typed languages is that they can automatically generate some tests for you, e.g. with the QuickCheck library for Erlang and Haskell.


They are not orthogonal at all. Both verify your code against invariants. While the type system tries to prove that an invariant holds, a test tries to prove that it does not hold.

That means that false positives are the problem for type systems, while false negatives are the problem for tests. But that's the biggest difference you will find.

One can even replace the other.


Clojure has quickcheck as well: https://github.com/clojure/test.check
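The canonical property from test.check's README looks roughly like this (it needs the test.check dependency, and generator names have shifted between versions, so treat this as a sketch):

```clojure
(require '[clojure.test.check :as tc]
         '[clojure.test.check.generators :as gen]
         '[clojure.test.check.properties :as prop])

;; property: reversing a vector twice gives back the original sequence
(def reverse-twice
  (prop/for-all [v (gen/vector gen/int)]
    (= v (reverse (reverse v)))))

;; run it against 100 randomly generated vectors;
;; returns a map with keys like :result and :num-tests
(tc/quick-check 100 reverse-twice)
```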


Good point. Of course you then have to specify the type of the values that you want to test in the test itself, so you have to specify some types one way or the other.


You can feed those types from clojure.spec now :)


I've actually felt this same way for a while now. This past year or so I've worked hard (IMO) to "get" the benefits of strong typing, but I can't shake the feeling that the tradeoffs just aren't worth it. I was hoping the verbosity, complexity, etc. (strong types) would give me more confidence and security in my code. It always seems like a false sense of security, though, and I'm currently on the fence as to whether or not it's worth it.

Are there any studies showing that strong typing is actually a disadvantage? I've seen some stating the opposite, but nothing really stood out to me. Sorry for the long rant, but at this point I can't find anything empirically stating one is better than the other.


Static typing (and other static checks) will reduce the round trips between editing and running. Some of your typos that are not syntax errors get caught and reported to you. Of course, you were going to test that code anyway, but you avoided a bit of wasted time. However, this advantage can be eroded by the trade-offs.

Interesting experience recently: I added static checks to a Lisp dialect which report warnings for all occurrences of unbound variables and functions. (Not type checking, but a very basic static check). According to static proponents, bugs of this type should exist left and right if we don't have the check.

Only one instance of an unbound variable was found in the dialect's standard library: and that was a function that was added as an afterthought and never tested. The problem reproduced 100% when calling the function in any correct manner whatsoever; the function was completely broken. If it had been called just once after having been written, the problem would have been caught.

I fixed the problem so that the warning went away, and caught myself almost starting to think about committing the fix, when I realized: what am I doing? I still haven't called the function. Gee, it now passes the one feeble static check that was added, so it must be correct? Haha.


For me the only downside of strong typing is that code takes longer to write. It's reminiscent of my years writing C++ where often fiddling around with header files, type declarations and templates would seemingly take up more time than the actual logic of the program. For the last few years I've been working in Scala and compared to Clojure it can take longer to knock out working code. That said with the help of type inferencing and IntelliJ it is nowhere near as slow as C++ was, and now I find that the benefits of strongly typed code greatly outweigh the costs in terms of syntax and development speed.


> ... type declarations and templates would seemingly take up more time than the actual logic of the program....

Just a small point: type declarations are part of the actual logic of the program. In type theory, types are logical propositions and values are proofs of those propositions.


This is a really interesting idea. It feels obvious once you get it, but I've never thought about it and I've never seen it before.


> That feeling is poorly supported by reality; it is the "safyness" of static typing.

Has that become a thing now? :))


There's the new clojure.spec[1] in 1.9, and core.typed[2] adds an optional type system as well.

[1]: http://clojure.org/about/spec

[2]: https://github.com/clojure/core.typed

The REPL-driven development of Lisps makes types not as important because everything is interactively tested as you write it. Having an optional type system makes it much easier to start with dynamic types and gradually add type annotations as your architecture stabilizes.


Conversely, lack of mutability makes dynamic typing much easier to reason about. Since variables typically can't be changed via side effects, you can do local reasoning about your code the vast majority of the time.


> Conversely, lack of mutability makes dynamic typing much easier to reason about.

Dynamic typing does not imply lack of mutability. Those are unrelated concepts.

Python and Ruby are dynamically typed but both are certainly mutable.

Conversely, static typing actually makes it easier to reason (locally or not) about things, since you always know the type of everything.


I think he meant it in the context of Clojure, which has immutable values.

I don't feel static typing makes things easier to reason about at all. Reasoning about values and their transformations is more important to me than reasoning about types, and since values carry types, you're actually working with more information. Most of the time the types I'm interested in are closer to concepts (sequences, mappings) than concrete classes.

That's where having immutable, dynamically typed values is better than having statically typed, mutable ones.


> since values carry types you're actually working with more information.

Only for the specific value that you are reasoning about. If you want to reason about all possible values, then you are de-facto reasoning about their types.

> Most of the time the types I'm interested in are closer to concepts (sequences, mappings) than concrete classes.

But sequences and mappings are also types, right? You can abstract types to type classes (i.e. whole classes of types, e.g. all types that can be iterated over, or all types that are ordered, etc.) and also reason about them.

> Thats where having immutable dynamically typed values is better than having statically typed mutable ones.

Sure, ideally you want to have immutable statically typed functions and values, since you can then do some equational reasoning and prove certain properties of your program.


> If you want to reason about all possible values, then you are de-facto reasoning about their types.

True, but a variable can only ever have one value at any given time; knowing a variable is an int is good, knowing a variable is the same int value throughout its extent is better :)

> But sequences and mappings are also types, right?

Yes, but they're not concrete types, and they don't map to a single interface either. When they do, these interfaces are compositions of smaller interfaces anyway, and I prefer to reason about these smaller parts.

Which means I'm reasoning more about the shape of the data than the actual type implementing it, and that to me is closer to a value than a type. It's the same with type classes: you're reasoning about the features of a value rather than the specific type implementing said features.

> ideally you want to have immutable statically typed functions and values

For functions we're talking about purity rather than immutability (which qualifies variables). Type systems are also very leaky abstractions: null pointer exceptions, no way to limit the range of a value, extra code that introduces its own bugs and complexity, and more.

From experience I prefer a smaller dynamically typed codebase that has tests for the trickier parts over a statically typed codebase (even if tested). The productivity difference is startling and the resulting quality is about the same.


> ... I'm reasoning more about the shape of the data than the actual type implementing it, and that to me is closer to a value than a type.

The shape of the data is its type. You have been reasoning about types all this time. Check out https://en.wikipedia.org/wiki/Structural_type_system


I never said dynamic typing implied lack of mutability. What I meant is that Clojure defaults to immutability, so reasoning about types becomes easier than in a language with mutable data.

In a language like Ruby or Python, it's possible for a side effect to change the type of the variable you're looking at externally. This cannot happen when you're writing idiomatic Clojure.

Static typing comes at a cost however. You have to figure out how to encode the problem using types. Effectively, you have to prove to the compiler that your code is self-consistent. Proving something is often much harder than stating it.

At the same time, static typing doesn't guarantee semantic correctness. So, it doesn't actually tell you that your code is correct in any meaningful sense.

I think that Clojure Spec packaged with the 1.9 release, provides a much better way to catch errors than types. The main reason being that it focuses on ensuring semantic correctness.

For example, consider a sort function. The types can tell me that I passed in a collection of a particular type and I got a collection of the same type back. However, what I really want to know is that the collection contains the same elements, and that they're in order. This is difficult to express using most type systems out there. However, with Spec I can just write:

    (require '[clojure.spec :as s]               ;; clojure.spec.alpha in later releases
             '[clojure.set :refer [difference]]) ;; for the :fn check below

    (s/def ::sortable (s/coll-of number?))

    (s/def ::sorted #(or (empty? %) (apply <= %)))

    (s/fdef mysort
            :args (s/cat :s ::sortable)
            :ret  ::sorted
            :fn   (fn [{:keys [args ret]}]
                    (and (= (count ret)
                            (-> args :s count))
                         (empty?
                          (difference
                           (-> args :s set)
                           (set ret))))))
The specification will check that the arguments follow the expected pattern, and that the result is sorted, and I can do an arbitrary runtime check using the arguments and the result. In this case it can verify that the returned items match the input.
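To make the semantic-check point concrete, here is a minimal REPL sketch of a spec like ::sorted on its own (the namespace is clojure.spec in the 1.9 alphas and clojure.spec.alpha from 1.9 final on; adjust the require accordingly):

```clojure
;; a spec is just a named predicate; s/valid? asks whether a value conforms
(require '[clojure.spec.alpha :as s])

(s/def ::sorted #(or (empty? %) (apply <= %)))

(s/valid? ::sorted [1 2 3]) ;; => true
(s/valid? ::sorted [3 1 2]) ;; => false

;; s/explain-str returns a string describing which predicate failed
(s/explain-str ::sorted [3 1 2])
```

Because the check is an arbitrary predicate, it captures "is actually in order" directly, which is exactly the property most type systems can't state.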


> ... you have to prove to the compiler that your code is self-consistent. Proving something is often much harder than stating it.

Not really. When you learn the properties of your type system, you can 'prove' your way through most day-to-day programming tasks fairly easily. And when you can't prove something to the compiler, you can just fall back to telling it using its 'dynamic' escape hatch. You're certainly no worse off than you are with pure dynamic typing.

> ... Clojure Spec ... focuses on ensuring semantic correctness.

Nothing in a static type system precludes something like Clojure Spec, or at least something very much like it. You can still fall back to runtime tests that your function works according to spec. In fact that's what Haskell's QuickCheck and related generative testing systems are all about.


Indeed, they are distinct concepts. However, in Clojure, only a small handful of datatypes allow mutation (sorta like refs in ML, but thread-safe and with various semantics depending on how that thread-safety should be handled). All of the usual datatypes are immutable (lists, vectors, hashmaps, sets, strings, etc.). If you need mutation, you need to wrap your object up in a "mutable box".


Typed data was already possible with schema, which is now maintained by the plumatic (formerly prismatic) team. Which also says something about the way Clojure is awesome: everything is optional, and you aren't forced to use anything to get to a working solution. Stuart Halloway and Rich Hickey also have some great talks on this subject. If you are interested you might want to check out "Radical Simplicity" [1] by Stuart and "Simple Made Easy" [2] by Rich to see why Clojure wipes the floor with almost any other programming language, especially the likes of C# and Java.

I am not surprised at all Bob Martin loves it. Any principled software engineer would.

[1] https://skillsmatter.com/skillscasts/2302-radical-simplicity

[2] https://www.infoq.com/presentations/Simple-Made-Easy


> "the feeling you get with OCaml is that once your program compiles, it most likely also runs correctly"

The feeling you get with Clojure is that, when you write and test each function (using CIDER) and get live feedback to make sure they work, you can combine them into bigger functions that most likely also run correctly.


"baroquesque abomination" is a very colourful but somewhat unfair description of Scala. I've been using it in production for a few years now and I find it can be quite pretty if you're careful, and is quite readable once learned. Not only that but after years of Common Lisp and Clojure programming (at hobby level) I never really satisfactorily lived the dream of the programmable programming language that lisp offers. With Scala I find it has capable facilities for writing DSL's without even resorting to compile type macros. I do agree the syntax can be terrible especially when people go crazy making DSL's for libraries or make their code too dense and unreadable.


One issue with Clojure is that it comes with strong opinions (STM, immutable data structures, JVM-Java ecosystem) and thus is not as paradigm-enclosing as other Lisps (e.g. Common Lisp).

I'd much rather have SBCL's native code compiler, read/compiler macros, conditions and restarts (that I end up using on pretty much every project) and optionally use libraries for immutability and STM (if and when I need them), than compromise from the get-go and use a language that reduces the set of available options by forcing its specific worldview.

If I do need a strong focus on concurrency, I find Erlang (and also Elixir) a much more coherent solution. The cognitive dissonance that comes from having to interact with Java when using Clojure is very damaging and it can't be abstracted away. Just look at Clojure stack traces.

So to end with, Clojure proponents should understand its limitations and design choices. It was created because Hickey needed a language to solve problems he was having in his specific arena (delivering concurrent applications in the Java ecosystem with his consultancy) and not really to improve on the Lisp state-of-the-art side of things. If you're a Java guy and absolutely need to stay in that ecosystem, I guess you can go with it. Otherwise, you end up giving away too many things. This has been validated in practice, in the Common Lisp community. We get a lot of guys who are coming _from_ Clojure, but it is rare for someone to move _to_ Clojure from CL.


> It was created because Hickey needed a language to solve problems he was having in his specific arena (delivering concurrent applications in the Java ecosystem with his consultancy) and not really to improve on the Lisp state-of-the-art side of things.

It's true that Clojure isn't trying to be SBCL, but you're mischaracterising what Clojure is designed to do.

Clojure is based around the idea that good code minimises interconnections. In Clojure parlance, "complex" code is code that is very interconnected, and "simple" code has few interconnections. Clojure is designed to make it easy to write "simple" code.

CL programmers tend to look at Clojure and say "Why should I use Clojure when CL is more powerful?", but Clojure isn't trying to be more powerful than SBCL, because sometimes adding power means sacrificing simplicity.

Clojure is an insufficient CL, but CL is an insufficient Clojure.


> The cognitive dissonance that comes from having to interact with Java when using Clojure is very damaging and it can't be abstracted away. Just look at Clojure stack traces.

This is what has kept me from really feeling comfortable getting into Clojure. Unfortunately there are so many downsides in the Functional programming world :( It's either:

1. Have a strange interop (Clojure with Java, and even though Elixir with Erlang isn't as bad, it still bothers me)

2. Lack of mainstream adoption (e.g. Haskell, yes FB uses it for their spam filters.. but mostly working developers predominantly use other languages)

3. Lack of clear tooling choices or implementations (Haskell's stack vs. platform, or Scheme is great but no one popular implementation)

(I know I'm being quite pessimistic)


What's so weird about Clojure interop? I rarely find myself going back to writing full Java code when in a REPL.


> The cognitive dissonance that comes from having to interact with Java when using Clojure is very damaging

I don't use Clojure, but isn't relatively seamless Java interop -- and thus access to the massive Java ecosystem -- considered one of the big pragmatic advantages?

> Just look at Clojure stack traces

There are prettifiers that clean this up, but I understand your complaint is cognitive dissonance. The Java guts are right there, and I guess this is the heart of that matter. Some don't like chocolate in their peanut butter.


I really tried SBCL, but even knowing a little Lisp (I own about 6 books and have read most of them), I couldn't get past the tooling. SLIME+emacs is powerful, but the tutorials are awful and very lacking. Sadly, Clojure isn't much better here. Racket is pretty good here, but I can't get past the fact that I'm essentially playing with an educational product and not a real industrial language.



Can't you just use whatever environment you're familiar with while getting used to the language?

Developing in Common Lisp is perfectly doable by editing then saving a file and reloading it in, for example, SBCL.


You can try Chicken Scheme, which has the added benefit of great C interop (because that's what it compiles down into)


Try Cursive on IntelliJ. It absolutely rocks.


STM seems pretty much forgotten in Clojure-land today.


That is not true. It is a vital part of the language. It is not often talked about because it's not 'flashy' anymore. If you want to keep some data in a safe way, you have an easy language tool to do that for you. There are also more advanced usages. People use it, and it works.

I think Agents are more on the way out because of core.async.


For me, Clojure is the most satisfying language to write in. The design of the language feels impeccable most of the time. Working in the repl, with all of your application code loaded into the runtime, sitting in an adjacent terminal or editor tab, is immensely satisfying, much like the experience of a craftsman working with hand tools. Unfortunately, I find Clojure code to suffer in terms of readability, and I think it suffers from the same problem all dynamic languages suffer from: they just don't scale to larger applications and teams as well as static languages do. Proof of this is the fact that pretty much every dynamic language out there eventually looks to graft something like a static type system on after the fact. Basically I think people advocating Clojure outside the context of small, high-ability teams are (mistakenly imho) prioritizing write mode over read mode.


au contraire - many find Clojure code extremely readable (even compared to other Lisps). It's possible to write some obscure functions in any language, developing good taste and habits to write nice and reasonable code is just a matter of practice. I believe Clojure gives you enough discipline to learn that fast. It just seems there are certain types of devs who are inherently dyslexic - they struggle to read Lisps. Where a Lisp developer sees structure and beauty they see nothing but quivering beehive of parentheses.


They left out the part where Clojure DOES NOT implement tail-call optimization for mutually recursive functions. Without TCO, you can't fully leverage functional composition and immutable data structures without introducing some ugly hacks or relying on other language-level constructs offered by the implementation--that they can't be implemented in the language strongly suggests something about the limitations of the language.[1]

A great language that does optimize mutually recursive tail-calls, among many other elegant features: Lua!

... TCO, asymmetric stackful coroutines (90% as powerful as call/cc, but with zero calories), lexical closures, prototype-based object orientation, duck typing-based object orientation (among other possibilities), and a first-class C API that allows C code to work cleanly with coroutines, closures, the object system(s)....

Lua is truly multi-paradigm. The only downside is dynamic typing, though that's not always a liability and often an asset. Also, Lua has other interesting features, like lexical global namespace substitution using _ENV, that permit devising solutions to minimize some of the headaches of dynamic typing. And because Lua is so powerful in so many dimensions, yet so simple and tiny, writing unit and regression tests is often a breeze.

Once you add the canonical extension module LPeg into the mix (written by one of the Lua co-maintainers), Lua is about as formidable as they come. Only Perl 6 comes close to the ease of writing parsers with Lua+LPeg.

[1] The JVM will eventually add support for implementing TCO in Clojure and other languages, at which point many Clojure proponents will find religion in the power of TCO.


Clojure does actually support TCO; it just doesn't support automatic TCO, because of JVM limitations. And after using explicit TCO calls in Clojure for a while, I actually prefer it now over automatic TCO, because I can clearly see from reading my code where tail calls are optimized and where they build up stack. I don't have to try and work out if the recursive call is in the tail position. It's all explicit.

For mutually recursive TCO there's `trampoline`

https://clojuredocs.org/clojure.core/trampoline
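A quick sketch of trampoline with the classic mutually recursive pair (the function names here are my own):

```clojure
;; each function returns either a final value or a zero-argument thunk of
;; the next call; trampoline keeps invoking thunks until it gets a non-fn,
;; so the stack never grows no matter how deep the mutual recursion goes
(declare my-odd?)

(defn my-even? [n]
  (if (zero? n) true #(my-odd? (dec n))))

(defn my-odd? [n]
  (if (zero? n) false #(my-even? (dec n))))

(trampoline my-even? 1000000) ;; => true, no StackOverflowError
```

The cost is wrapping each tail call in a thunk by hand, which is why plain self-recursion is usually written with recur instead.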


Agreed that explicit is better than implicit but I can't imagine when you might want tail calls to build up stack. And if you do have a use case for it in mind, wouldn't you want to be explicit about that and have TCO be the default?


Well that (default being TCO, and explicitly declaring a stack call) would mean every normal function call (the vast majority) would need to be explicitly declared (call func args...), and the implicit case would need to blow up everywhere it's invoked in a non-tail position.

What I was getting at is: in an automatic TCO language you don't actually know if the call is optimised or not. You might think it's optimised, but it isn't, because it's not in the tail position. You don't find this out until one day in production the stack explodes. The only way to know if it's optimised is to determine if it is in the tail position. That is sometimes straightforward, but sometimes requires careful thought. It is certainly not always obvious.

Additionally, another coder can come along and break your tail position. You've gone "return func()" and it's TCO and then later someone changes it to "return 1 + func()" and now the call to func is not TCO, but there is no warning and no obvious outward signs that this has changed.

In the explicit TCO of clojure, if the second programmer adds the extension the compiler will explode with something like "recur not in tail position", immediately informing you that this has happened, and then you may either refactor to put it back in the tail position, or change it to a full stack function call once you determine that building up stack is ok in this case.

Now, I think loop/recur was implemented in Clojure as a workaround for the JVM's limitations, but in doing so I think Hickey stumbled onto a really great new syntactic formulation for TCO: explicitly declared TCO. I like it.
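A small sketch of what that explicitness buys (function name is my own): recur re-binds the loop locals and jumps, and misusing it outside tail position is a compile-time error rather than a silent stack-consumer.

```clojure
;; explicit TCO: recur re-binds n/acc and jumps back to the loop head,
;; so this runs in constant stack space
(defn fact [n]
  (loop [n n, acc 1]
    (if (zero? n)
      acc
      (recur (dec n) (* acc n)))))

(fact 20) ;; => 2432902008176640000

;; the "second programmer" mistake is rejected at compile time:
;; (defn f [n] (if (zero? n) 0 (+ 1 (recur (dec n)))))
;; => CompilerException: Can only recur from tail position
```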


Clojure's loop/recur is pretty much a degenerate version of Scheme's named let, which has been part of Scheme since long before Clojure was a thing:

  (define (sum xs)
    (let loop ((acc 0)
               (xs xs))
      (if (null? xs)
        acc
        (loop (+ acc (car xs)) (cdr xs)))))
It's often used in Scheme for iteration, including cases where you would otherwise need to define a helper function to implement an iterative (tail-recursive) version of a function, as I did with sum above.

The named let construct is more general than loop/recur, since you can supply a name and properly nest the forms.
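For comparison, a sketch of the same sum written with Clojure's loop/recur; unlike a named let, the jump target is anonymous (always the nearest loop or fn head), which is what makes it the "degenerate" version:

```clojure
;; loop/recur spelling of the Scheme named-let sum above
(defn sum [xs]
  (loop [acc 0
         xs  xs]
    (if (empty? xs)
      acc
      (recur (+ acc (first xs)) (rest xs)))))

(sum [1 2 3 4]) ;; => 10
```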


This is why I like Scala's approach so much.

It will TCO automatically. If you want to make sure you haven't made a mistake, simply add the @tailrec annotation, and the compiler will give you an error if it can't TCO.


That sounds pretty good! I will check that out.


I want useful stack traces by default - the cases where I most need a stack trace are usually precisely the cases where I didn't think much about that function when writing it.


You probably wouldn't ever want to not optimize, but the point is that, by being explicit, you know which parts are optimized and which are not.

If you think about it, you can't really do the opposite, because it is not possible to TCO every recursive call.


TCO is possible on the JVM but not in Java. Clojure is just a Java library and shares all of Java's problems, like TCO, class sizes, and inline limits.

I no longer believe in transpilation or hosted languages. A proper language has an LLVM backend. Look at Rust, Swift, Crystal; this is the future.


Or you use a VM that was built for that cough cough Erlang cough cough


But Clojure and Scala chose to sell their souls for Java popularity.


Is "selling their souls" really what they're doing? When writing a new language shouldn't one consider its adoption by the community to be a key part of its success? The purist would say all that matters is sticking to a core philosophy while the pragmatist is willing to make some concessions for long-term viability and adoption so long as it doesn't completely undermine the purpose of the language. Essentially, don't you think Hickey, after several failed attempts at writing new programming languages, had given quite a bit of thought about what makes a language successful (and useful) before making the decision to make Clojure JVM based? Of course he did; even explaining the factors that went into the decision. That's not "selling your soul", it's a practical decision that comes from a lifetime of experience.


Because Clojure is not a new language; it is another Lisp. Language adoption should not be the primary driver.

Clojure is more Java-based than JVM-based. This is the main issue. It inherits all of Java's limitations and shares problems inherited from the Lisp world. https://github.com/clojure/clojure/blob/master/src/jvm/cloju...

By selling their soul I mean:

1. Clojure will forever have ugly/broken stack traces.

2. Inline/class limits on the JVM will not disappear.

3. Half of Clojure libraries are just wrappers.

4. Lack of TCO and other recursion optimisations.

5. Performance will always be worse than Java's.

6. A type system with a garbage in -> garbage out approach.


Rust is a systems programming language. What makes Swift and Crystal more "proper" than JVM languages?


LLVM is "proper" back-end for code generation that produce native executable optimized for any platform.

The JVM executes bytecode in the Java Virtual Machine. The JVM abstracts your hardware and has a specific runtime that needs to be present.

LLVM is a superior model because it avoids additional compilation and can in theory provide much better performance.


Actually, the JVM can in theory provide much better performance, because code is specialized as it runs. Statically compiled code must be produced to handle the worst-case usage.

As for the runtime being present, it's more of a packaging issue. A language like Go has a runtime that's similar to Java's (except it doesn't JIT-compile), but it is statically linked with your binary. This can be done in Java, too.

I must say that it doesn't seem like you're very knowledgeable when it comes to compilation techniques. LLVM and the JVM do make different tradeoffs, but they're not at all the ones you describe.


> JVM can in theory provide much better performance, because code is specialized as it runs.

Only better if you don't include the runtime compiling (start-up time).


The JVM is usually used for long-running applications so the warmup (not start-up, which is 60ms) time doesn't matter. It's not a great choice for short-lived command-line apps.


The JVM is usually used for long-running application because the warmup is so slow. Were it not so slow, it would be used for things other than servers.


Maybe, but most large, "serious" software is long-running anyway. There is interesting work on reducing warmup (because some banks doing HFT are willing to pay Oracle to do that), but there's not much interest. Not all niches of the software world are equally interesting, and besides, there are some incumbents that are hard to unseat in those smaller niches so it's questionable how much effort should be put into competing in niches that are either small, have well-entrenched incumbents, or both.


You express very strong opinions about Clojure and the JVM in particular, in comparison to the LLVM. Would you mind sharing some particulars of your experience with the two and what domain you work in?


> Without TCO, you can't fully leverage functional composition and immutable data structures

Look, I like tail calls as much as any language geek, but this statement is just plain silly. The idea that TCO is needed to "fully" leverage anything is silly. Never mind that necessitating so-called full leverage is pointless, as even supposedly partially leveraged anything can be useful. In the case of Clojure, function composition and immutable data structures are highly leveraged and, after learning the seq abstraction and lib functions, you won't even miss tail calls unless you try to write a direct-style interpreter for a language that supports recursion.


> The idea that TCO is needed to "fully" leverage anything is silly.

Functional programming relies completely on recursive functions. You just can't do "full" FP without tail call optimizations.

Now, I do side with the other poster up there that is talking about jumps. You don't even need jumps actually, you can use inlining and loops too, but they do make the compiler's life easier.


I think that Clojure's success is proof positive that explicit tail recursion and lazy sequences is sufficient to do meaningful FP in the absence of proper tail calls. However, even if you insist that it is somehow not "full", who cares? Clojure is about practicality, not appeasing some platonic ideals. And I say this as some one who would very much appreciate having proper tail calls.


Of course, having lazy sequences and functions for manipulating them is better than not having them.

Yet that's such a small part of what FP has to offer that it isn't funny.


¯\_(ツ)_/¯

Extremist views are not particularly useful in engineering, despite their utility in science.


Perhaps TCO is not required for idiomatic Clojure, but for advanced functional programming using e.g. Monads or CPS, TCO is absolutely necessary in practice.


The fact that monads are not popular in Clojure is a good thing, IMHO. Besides, test.check and clojure.spec get by just fine with macros for generator monads. It's not quite perfect, and everyone would like proper tail calls, but it's far from an impediment for the overwhelming majority of tasks tackled with Clojure.


I have already stated that it is quite possibly not a problem for Clojure, although this hasn't stopped all the downvotes I keep getting (which doesn't leave me with a good impression of Clojure users). My point is that, despite the claims of the parent post, TCO is needed for many advanced functional programming techniques.


You're being downvoted because you keep repeating a claim without evidence, despite both evidence to the contrary and acknowledgement of the utility of tail calls.


No, I have been disputing the following parent claim:

> The idea that TCO is needed to "fully" leverage anything is silly.

I provided two examples, monads and CPS transforms, which require TCO to fully leverage in a strict language. But really any example where functions are dynamically composed as a pipeline needs TCO to avoid leaking stack. Your examples of a couple of test libraries in Clojure hardly constitute evidence that TCO is of limited utility.


> any example where functions are dynamically composed as a pipeline needs TCO to avoid leaking stack

Sure, but in the rare case that this is a problem, you can trivially work around it:

    (reduce (fn [acc f] (f acc)) init [f1 f2 f3])
This approach may not match the Haskell ideal of FP, but it's actually got many significant advantages. Frequently, programming with data structures is _better_ than programming with function composition. For example, you may choose to do something like this:

    (reduce (fn [acc f] (println "executing" f) (f acc)) init [#'f1 #'f2 #'f3])
Where this is now a pipeline with logging of the stages. You can't do that if you prematurely select function composition as your primary way to structure computations.

The Haskell research community is in the process of undoing this brain damage on the monad front by embracing extensible effects, free(-er) monads, "reflection without remorse", effect interpreters, etc.

Once you need any sort of symbolic reasoning, you drop opaque composition and work with data. This is true in every language, FP or not. People tend to do exactly the right thing already: abandon function combinators and write interpreters over data structures.


I disagree that it is always trivial to work around. Typically trampolines are used, but these have a large efficiency hit, up to 1000 times slower. Not a problem in many scenarios, but clearly not good for e.g. a concurrency monad. For your pipeline example, you gave a trivial, statically determined one. Try an example where each function decides the transition (tail calls the next one). This is of course also what my monad and CPS examples in essence do.

Regarding your comments on replacing function composition with data structures: in essence, this is what trampolining does. But it does have a performance cost and is not appropriate in all cases. IIRC the performance of "Extensible Effects" is still an outstanding issue. Incidentally, the primary author of that work, Oleg K, has published a lot of work in which he seeks to avoid intermediate data with "finally tagless".
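For reference, the dynamic hand-off case (each function tail-calling a successor it chooses at runtime) is exactly the shape Clojure's trampoline covers, with the constant-factor overhead discussed above. A hypothetical sketch (stage names are my own):

```clojure
;; three-stage pipeline where each stage picks its successor at runtime;
;; a thunk continues the chain, a non-fn value (here, a map) stops it
(declare transform finish)

(defn validate [x]
  (if (neg? x)
    {:error "negative input"}  ;; bail out: trampoline returns this map
    #(transform x)))           ;; thunked tail call to the chosen successor

(defn transform [x]
  #(finish (* 2 x)))

(defn finish [x]
  {:result x})

(trampoline validate 21) ;; => {:result 42}
(trampoline validate -1) ;; => {:error "negative input"}
```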


Clojure has TCO, though. This is what recur is for.


No, Clojure does not have TCO, due to the JVM. It allows you to manually optimise self-recursion via recur. This is not enough for many advanced FP techniques, such as CPS. For example, using CPS and TCO, I can traverse a tree in constant stack space.

I agree with the parent that lack of TCO in Clojure is probably not an issue for most.


If you are traversing a tree in constant stack space, the closures your continuations are holding on to must be O(log n) in space, representing what would have been on the stack, right?


Yes absolutely, I'd be trading stack for heap.
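A sketch of what that trade looks like in Clojure, combining CPS with trampoline so the pending work lives in continuation closures on the heap (the tree representation and names here are my own):

```clojure
;; sums a binary tree of nested two-element vectors, e.g. [[1 2] [3 4]].
;; every recursive call AND every continuation invocation is wrapped in a
;; thunk, so trampoline keeps the JVM stack flat; the "stack" is the chain
;; of closures l/r/k capture on the heap
(defn sum-tree [t k]
  (if (number? t)
    #(k t)
    #(sum-tree (first t)
               (fn [l]
                 (sum-tree (second t)
                           (fn [r] #(k (+ l r))))))))

(trampoline sum-tree [[1 2] [3 4]] identity) ;; => 10
```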


Lua isn't homoiconic and doesn't have LISP-style macros; yes, there's this or that patch available, but like all other attempts at implementing macros in other languages, it never comes close to being as good as what LISP does naturally, because in LISP code is literally data.

In other words, Lua has nothing to do with LISP, except that it's your favorite, hence I don't understand this comment.

As for TCO, it's regrettable that the JVM runtime doesn't provide support, however you can work around it by implementing trampolines, which is how developers of Scala, Clojure and PureScript have managed to live. For example you can implement a fairly efficient and lazy IO type, like the one in Haskell, which then gives you memory-safe tail-calls for free. Examples include the Free monad or the Task type from Scala.

The only downside of this approach is that it involves effectively building your own call-stack and keeping it in heap, which stresses the garbage collector and implies some indirection. But then again, if you'll look at the TCO performance of platforms supporting it (e.g. .NET, ES6), TCO calls have a performance penalty which can be quite significant, so I'm not sure that you'd gain much in terms of performance.

And if you measure the performance of FP-heavy Scala and Clojure, you'll see that it's not bad compared with Scheme implementations or Haskell, in spite of trampolines being used for tail-calls.
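For readers unfamiliar with the technique, a minimal trampoline sketch (illustrative Python; `countdown` is a made-up example, and Clojure ships a built-in `trampoline` that works along these lines):

```python
def trampoline(f, *args):
    """Repeatedly call thunks until a non-callable value appears.

    The chain of pending 'tail calls' lives on the heap as closures,
    so stack depth stays constant. Assumes the final result is not
    itself callable.
    """
    result = f(*args)
    while callable(result):
        result = result()
    return result

def countdown(n):
    if n == 0:
        return "done"
    # Return a thunk instead of making the tail call directly.
    return lambda: countdown(n - 1)

print(trampoline(countdown, 100_000))  # -> done (no stack overflow)
```

The overhead mentioned above comes from allocating one closure per "call" and the indirection of the driver loop.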


You have listed the reasons why I wrote a language that's a superset of Lua and Lisp, allowing lisp style macros to manipulate lua code as data. http://github.com/meric/l2l


I'm getting a bit tired of the "we don't have TCO because the JVM doesn't support it" argument. You know what doesn't have a tail-call construct either? Every hardware architecture in existence. You know what they have though? They have jumps. That is all a TCO is. It turns a recursive construct into a looping construct. And it's a really simple optimization - just update the local arguments to their new value and jump back to the start of the function instead of a call + return.

No, Clojure doesn't have TCO because of a design decision. We can discuss the rationale behind that decision and the merits of it, but claiming it has anything to do with the JVM is dishonest.
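As a sketch of the transformation described above (illustrative Python; this is the rewrite a compiler would perform mechanically): a tail-recursive function and its loop form, where the "call" becomes an argument update plus a jump back to the top:

```python
def gcd_recursive(a, b):
    # Tail call: nothing happens after the recursive call returns.
    if b == 0:
        return a
    return gcd_recursive(b, a % b)

def gcd_loop(a, b):
    # The same function after tail-call optimization: update the local
    # arguments and jump back to the start (here, a while loop).
    while b != 0:
        a, b = b, a % b
    return a

print(gcd_recursive(48, 18), gcd_loop(48, 18))  # -> 6 6
```

For self-recursion this is exactly what Clojure's recur does, except the programmer must request it explicitly.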


> You know what doesn't have a tail-call construct either? Every hardware architecture in existence. You know what they have though? They have jumps.

Guess what the JVM doesn't have. Remember also that the JVM was originally designed for running untrusted bytecode, so it doesn't generally let code directly manipulate the call stack.


> Guess what the JVM doesn't have.

https://docs.oracle.com/javase/specs/jvms/se7/html/jvms-6.ht...

> so it doesn't generally let code directly manipulate the call stack.

You are doing something wrong if you need to manipulate the call stack to do a tail call. You just have to update the local arguments and jump back to the start of the function. It is literally just turning the call into a loop.


That works for tail recursive functions, but not tail calls.

TCO works for all tail calls, not just the recursive ones, so that jump would be to another function's body. The current stack frame may not have enough space for the new function's.


It's true, but you probably don't want TCO for all tail calls since you lose information in stack traces.

As long as you have control over all the code generated, you can in principle compile multiple functions into a single one and just add some stubs for interop purposes. This of course complicates how you track function pointers (or their equivalents) and may add some overhead.

Still the cases where you can use recur today could be done automatically by the compiler without much trouble.


Good point!

One worry I have about automatic TCO where recur/loop currently is used is with Lisp's dynamic nature. What happens to everyone who has a reference to the fn when you redefine its def?

You're basically switching (defn f [] (loop [] (recur))) to (defn f [] (f)). What if I do (def g f) and then (def f +)? Will the original f now recur into + when I call (g)? If it still calls into the original f, then that creates a mismatch between reloading normal and recur'd functions, adding complexity to a language striving for simplicity.

I'm probably overthinking it :)


If you want late binding at the call site it's probably unavoidable to have some extra complexity anyways, what if I changed f into something that wasn't callable?

However Clojure binds at the definition time, e.g.

    > (defn f [] 42)
    #'sandbox9516/f
    > (f)
    42
    > (def g f)
    #'sandbox9516/g
    > (g)
    42
    > (def f 1337)
    #'sandbox9516/f
    > (g)
    42
    > (f)
    java.lang.ClassCastException: java.lang.Long cannot be cast to clojure.lang.IFn


That's because defn binds f to the _value_ of the function, while (f) binds to the _variable_.

  > (defn f [] (f))
  #'sandbox9672/f
  > (def g f)
  #'sandbox9672/g
  > (def f 42)
  #'sandbox9672/f
  > (g)
  java.lang.ClassCastException: java.lang.Long cannot be cast to clojure.lang.IFn


It's not that dishonest. If the JVM supported TCO natively then it would be a no-brainer for Clojure. The alternative is to give up on seamless interop between Clojure and Java libraries.


> The alternative is to give up on seamless interop between Clojure and Java libraries.

Why? The only issue I can see is that you can't tail call to java code, but that would only be a problem if you are trying to do mutual recursion involving both Clojure and Java code.

For multiple Clojure functions with mutual recursion you could just compile all of them into a single function with a state argument to select which function to go to, and then add stub methods that just calls into the combined function with the correct value for the state argument for interop with java. The only artifact of that would be an extra method with a synthetic name in the stack trace.
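A sketch of that compilation strategy (illustrative Python with hypothetical names): two mutually recursive functions merged into one loop with a state argument, plus stubs preserving the original entry points for interop:

```python
def _even_odd(state, n):
    """is_even and is_odd compiled into one function. `state` selects
    which body to 'jump' to, and each tail call becomes an update of
    (state, n) followed by another loop iteration."""
    while True:
        if state == "even":
            if n == 0:
                return True
            state, n = "odd", n - 1    # was: tail call is_odd(n - 1)
        else:  # state == "odd"
            if n == 0:
                return False
            state, n = "even", n - 1   # was: tail call is_even(n - 1)

# Stub functions keep the original entry points callable from outside.
def is_even(n):
    return _even_odd("even", n)

def is_odd(n):
    return _even_odd("odd", n)

print(is_even(1_000_001))  # -> False, in constant stack space
```

The only externally visible artifact is the extra synthetic name (`_even_odd`) that would appear in a stack trace.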


Stubs == seams, IMO. Self calls are certainly not a problem, as you pointed out, but in that case recur is a pretty elegant way of supporting that while not making Clojure appear to support something it doesn't.

EDIT: Also what you suggest would imply huge changes to the way code is evaluated in Clojure - for example, I can create functions in the REPL and they work the same way as reading functions out of a file. How would that work in your scenario? I think every function would actually have to have a stub not just the mutually recursive ones because you might not know until later which functions to combine.


The ML family of languages and the Lisp one, with the exception of Scheme, don't require TCO.

So none of them are worthy of being FP languages according to you.


I think you'll find that you can rely on TCO in all of the languages you mention. Certainly Scheme, Racket and most ML languages and their implementations. Common Lisp does not require it but the most popular implementations (CMUCL, SBCL, CCL, Allegro, LispWorks) all provide full TCO.


I think pjmlp knows about this. The important word was "require", which makes a lot of difference. If you don't guarantee TCO at the language level, you can afford to emit code that performs other useful things (like debugging) while still conforming to the language specification.


Then your code will become implementation dependent and fail spectacularly when trying to port it.


SBCL does TCE in some circumstances/at some optimization levels, not in every instance. It is neither portable nor idiomatic CL. (Although I certainly write code that depends on it sometimes.)


>> ...Only Perl 6 comes close to the ease of writing parsers with Lua+LPeg.

I would argue that it doesn't get any easier than using Rebol/Red parse dialect - http://blog.hostilefork.com/why-rebol-red-parse-cool/


You can use a small patch to add the remaining power of call/cc to Lua. I've thought I would need to do this for a game, because I was using both "state stored in execution contexts" and "speculative execution of the game's update() function while waiting for remote players' inputs". I didn't end up making that game though :(


It is not a limitation of Clojure but a limitation of the JVM. And you can use the recur keyword to actually do TCO.


Recur only handles a limited case of tail call elimination, not when the function calls another function in the tail position. Also, you can implement TCE on the JVM, as Kawa Scheme does, just not efficiently. It is a limitation resulting from a design decision of Clojure.


I think my language of focus in 2017 will be Racket. It seems to be the perfect combination of optional (but somewhat static!) strong typing, highly advanced features and cool Lispness without becoming impractical.

Minilanguages... for everybody!


I have a comment on Racket below. I like the concept, but it seems to be pretty slow. I'm also not sure how many mini-languages are out there. What the "Red" language team is doing seems to be a similar concept. That makes sense as Racket is a Lisp and Red is mostly from Rebol, but has a native code compiler and comes with a tiny .exe. Their GUI engine is phenomenal.


On Scheme benchmarks, Racket does fairly competitively, probably due to its JIT:

http://ecraven.github.io/r7rs-benchmarks/benchmark.html

(Chez is still freakishly fast.)

However, if you're writing code in DrRacket, it has full debugging information enabled, which may slow the code down. So your REPL performance in DrRacket may be slower than if you ran Racket from the command line.

https://docs.racket-lang.org/guide/performance.html


The debugging slows the code down a ton in my experience, and it also prevents futures from running in parallel. Trying out the futures code from the guide, failing to disable debugging slows down the final Mandelbrot example by something like 100x on my machine.

Fortunately, DrRacket's friendly GUI makes it pretty easy to toggle debugging. I leave it in place most of the time, but occasionally the slowdown is just unbearable.


Thanks for the tips! I'd like to use ChezScheme, but haven't found a good way to get into it.


There should be a rule that if one writes a post saying language X is faster than languages A,B,C there should be some real world benchmark to add data to the discussion.


And benchmarks with rigor, I might add. Noting important dependencies that match chipset, bare metal CPU architecture/bus wiring and compiler choices.

In particular, I've spoken with people who have cited significant differences in performance, when using proprietary compilers matched to vendor CPU chipsets, for Linux/OpenJDK compilation, which then changes the profiling of the JVM benchmark, on the same bare metal crate.

Meanwhile, L2 cache misses, due to object reference churn, are the final deal breaker with the JVM, if I've read correctly and read enough about JVM performance. Beyond even garbage collection overhead, fundamental JVM performance cannot be tuned to optimize CPU cache use.

Briefly paraphrased: the JVM works with object references (the kind that print as "java.lang.Object@%HASH_CODE%"), and when you instantiate and garbage collect millions of instances across many operations, costing many cycles, you'll always suffer incurred latency, because most of your objects are frequently loaded into and then evicted from the L2 cache, no matter how long they actually live in the JVM heap.

I don't have the link handy, or I'd post it.

But long story short, even though a JVM benchmark may be performant, benchmarks do not peg CPU resources according to real life use cases.

And changing the OS, motherboard, and compiling from source on your own physical hardware can provide benefits that might not be obviously worth the extra effort to compile from source bootstrapping all the way from the kernel to the JVM, before deploying your java artifacts.


That's a fair request, but the article doesn't say Clojure is faster than any other language; it just says it is fast. That caught my attention as well, because I don't think of it as being particularly fast (mostly due to startup time).


> because I don't think of it as being particularly fast (mostly due to startup time).

Proposal: Fast for server use-cases?

Curiously, I was never really interested in the startup time of most of my code, probably because most of my code is long running where it doesn't matter if it takes a few seconds more to start.


There are a few efforts out there to provide something with near instant start up time. Can be useful for scripting, for example.

Planck, which runs on JavaScriptCore: http://planck-repl.org/

Lumo, which runs on V8: https://github.com/anmonteiro/lumo


I was not interested in that either until I had to start Java processes by hand. The initial version ran for 5 minutes. I then refactored the app to be a long running service instead of a fire-and-forget app and the run time changed from 300 seconds to 300 microseconds!


Five minutes? Ouch .. when I think of "slow" start of Java processes it is more in the range of 5 to 10 seconds. Maybe a minute tops for an application server, but five minutes sounds ugly. Especially when it is all just start time.


Startup of a single JVM was around 10 seconds, but I had to start up a JVM for each operation (think of it like a JUnit test suite from the command line). If you have several of these processes, the whole thing is no longer O(1) but O(n), and you begin to see the JVM startup overhead.


Probably being compared to languages like Python, Ruby, and Perl. It should be pretty close to statically typed languages on the JVM like Scala. I think a real REPL is worth a little bit of performance.


FYI http://jpad.io/ REPL for java


Which are mostly worthless, because most keep mixing implementations with languages.


Being able to target servers, web applications, and mobile apps is compelling enough for me (at least for personal projects). Throw in the simplicity of the syntax and the low dev iteration times and it makes me have to ask "Why not Clojure?" when starting something new.


>and mobile apps

Last time I tried (which was probably over 2 years ago by now), Clojure on Android had a huge loading time to the point of not being usable for anything serious (it offered no tangible benefits that would justify the load time), and iOS was something based on that JVM-for-Android port that Xamarin bought and then shut down. Has that changed? Or are you talking about CLJS + React Native and similar tech?


Yeah I'm talking about react native. I'm on mobile at the moment but look up "re-natal", it's built on top of react native with re-frame, it's superb.

Edit: https://github.com/Day8/re-frame

https://github.com/drapanjanas/re-natal


I had a BA student write his thesis on this matter. What if I want to write a system that has a web app, api, iOS and Android app? How can we mitigate the gazillion different languages and paradigms?

Turns out, it's easy with Clojure. Have even written desktop apps with it since.


Is that thesis publicly available? I'd love to read it.


The jvm platform has become (again) the most exciting platform to work with in recent times. And my favorite there is Kotlin.

kotlin 1.1 (still in milestone) is a brilliant and compelling language to use on the JVM.

Spring 5 is a very "functional" web framework that will come out in a few months with first class kotlin support (even today, it is fairly excellent [1])

Vert.x [2] has incredible support for Kotlin and is coming up with built in kotlin support. Coroutines got merged fairly recently [3]

Reactor (https://projectreactor.io/) and Rxjava have had kotlin support for a long time.

The tooling is excellent (umm.. it was developed by Jetbrains).

The killer app/functionality ? Android. Kotlin is the swift of Android and that's where its uptake is coming from.

[1] https://github.com/sdeleuze/spring-boot-kotlin-demo/tree/all...

[2] http://vertx.io/whos_using/

[3] https://blog.jetbrains.com/kotlin/2016/07/first-glimpse-of-k....


Full title: Why Clojure is better than C, Python,Ruby and java and why should you care

I recommend updating the title to "Why Clojure is better than C, Python, Ruby, and Java"


The original title is more of a clickbait. The article highlights good things about Clojure but it doesn't tell anything about the languages mentioned in the title.

The current one is much better.


It also agrees with what the original post was entitled.


That was the original title but someone changed it. A mod maybe?


Fwiw, here's a deep learning framework for the JVM that some Clojure people are using: https://deeplearning4j.org/ I bring it up not just because LISP was conceived as a language for AI, but because of the points about distributed computing made in the post. Multithreading is baked into the JVM, which is important because advances in AI are computationally intensive.


In what ways is Lisp / Clojure better for AI than say, Scala?


We had a majority of our codebase in Clojure back in 2011, but we found that it presented us with major roadblocks hiring new team members. Even some experienced people had a hard time getting into it, and the learning curve was non-trivial. So we dropped it. Probably not a good idea to build a big company with a not-so-popular programming language [1].

1. http://www.tiobe.com/tiobe-index/


My team has been using Clojure for the past 6 years, and we have the opposite experience. Most new developers we've hired haven't used Clojure before, but they were able to become productive doing useful work within weeks. The fact that we're using Clojure tends to be a positive factor for attracting candidates as well. A number of candidates we've interviewed mentioned that Clojure was the reason they applied for the job. They heard about the language and were interested in working with it professionally. In addition, we hired contractors and co-op students to work with our team, and they were able to pick up Clojure as well.

At this point I would actually use Clojure as a filter. I'm rather suspicious of any developer who wouldn't be able to learn Clojure.


How do you convince management to treat getting up to speed with a programming language (and your group's particular usage of it) as just another cost of onboarding? Or even better, how do you make the case to whoever is in charge of hiring that they shouldn't weigh prior experience with the company's language too heavily over newcomers, when adequate proficiency is only a matter of a few weeks? All else being equal, I think most hiring managers would naturally take the dev who already knows the language; but if it doesn't take long to learn the language up to the company's standards, then all else effectively is equal, and another metric is needed.


I guess I'm just lucky to work at a place where technical people make technical decisions. The management does not decide what languages are used or how developers are onboarded where I work.


Where are you located?

It's hard to find people (a) willing to make the jump and (b) making the jump effectively. And when you do, you can maybe tap into x number of devs for your team, but trying grow to 2x devs is much harder.


I'm located in Canada, and work at a research hospital in Toronto. We have a number of companies using Clojure in town, and run a meetup. As part of the meetup we're also running a Clojure workshop, so that helps a lot.

I also found that hiring students for junior positions from university works great as well. They don't have preconceptions about how to do development, and usually have easier time picking up new paradigms than devs who've been working in a particular one for a long time.


curious, how big is your team size?

>but they were able to become productive doing useful work within weeks.

Sure, anyone can learn Clojure syntax in a day and write imperative code in it. But I'm not sure everyone can throw out thought patterns from decades of OO and learn an entirely new paradigm of thinking about code.

One of the issue we had was precisely this, people who were just starting to write clojure were writing code that looked and felt totally different from people who have been doing it for years. Imagine paying this huge cost for every new dev in a large org.


My overall team is about 30 devs, and we're broken up into teams of 3-5 devs. My immediate team consists of 5 developers.

>Sure anyone can learn clojure syntax in a day and write imperative code in it. But not sure if everyone can throw out their thought patterns from decades of OO and learn an entirely new paradigm of thinking about code.

We don't have to hire everyone though. We just need to find one person who fits the team. A lot of developers nowadays are already familiar with functional patterns, so it's actually not that challenging to start doing stuff in Clojure.

We also do code reviews and pair up as needed. Developers are a long term investment, so spending a bit of extra time to get somebody onboarded is well worth the effort in my opinion.

We're using jira/bitbucket internally. Our process is that everybody works on branches, and when you finish a feature you tag somebody for a code review. We found that this helped us catch any non-idiomatic code fairly quickly.


This essay promotes the exact opposite: http://paulgraham.com/avg.html

So far I've only ever seen terrible CTOs and directors rely on TIOBE to the point I can't take it seriously whatsoever. Choosing a language based on TIOBE will yield you an endless supply of mediocre resumes for your job applications most of the time.

I feel like a company using a language they've carefully chosen for the problems they want solved, with a strong training program for new hires, will have a huge advantage over a company solving the same problems using languages naively taken from TIOBE. From experience, the latter company will easily spend two to three times as much in development costs and time for a product of lesser quality.

If you go with an esoteric language and then find new hires who already know the language, you know they weren't told to learn it. They were passionate enough to learn it on their own and these hires usually are golden.


Has anyone build a company with > 15,000 employees using a niche language? ( or even 1000 employees )

I was using TIOBE as an argument against a language for a huge company, not as an argument for a programming language.


Not overnight, and companies aren't built starting at 15k employees.

If you're already at 15k employees then pilot projects are the way to go; gradually incorporate the new tech in the company using low-risk projects. Their post-mortems will provide loads of material to decide whether its worth it as well as how to plan for formations should you stick with the tech. I've done it before, although not at 15k scale, but the method should scale all the same.

If you're a startup, going with TIOBE will probably help your competitors more than anything else really.

All languages were niche at some point. OS X was built on Objective-C and Windows on C++. I'd argue they both were niche languages at the time. Google spawned Go. Facebook spawned Flow. Microsoft also spawned TypeScript.

Programmer time is an extremely underrated metric and these niche languages are all attempts at improving it. If they succeed they stop being niche languages, at which point anyone already using them have a huge head start.

Companies not focusing on improving it are doomed to be replaced by those who do. Just like companies who kept to assembly are now history.


You still haven't answered my question. Is there a single company on the planet that has scaled to a big company using a niche programming language? MS/Google and other companies inventing their own languages are not good examples; Google/FB did not scale using Golang/Flow, they used Java/C++/Perl/PHP.


iOS and Mac OS X were built and scaled on Objective C, which is highly niche in that it has almost never been used for anything else besides these projects, and it was even less popular before those projects existed. Objective C would never have been chosen with risk-based technology decisionmaking. They would've chosen C++ or Java.

Apple was weeks away from bankruptcy when it adopted OS X, and is now one of the most successful companies in the world, having rescaled from the ground up on OS X. The iPhone's iOS is perhaps the most financially successful bit of consumer software in history.


Microsoft didn't invent C++, but they started with C which wasn't niche anymore at the time. So you do have a point here.

However, you don't see big companies emerging that often anymore nowadays, so that could be a factor as well. And you don't see many startups relying on TIOBE either. Every place where programmer productivity is critical to the company's success is where you'll see niche languages used the most.

If startups would stop getting acquired as soon as they get profitable maybe we'd see big companies emerging while using niche languages.


Clojure ain't a niche language anymore. It is steadily growing. I quit my very well paid job and, to my own surprise, I found a Clojure job a day later. I guess anyone who complains that it's impossible to find a job with Clojure projects isn't really looking.


Erm... Netscape->Mozilla?


We're at like 2000 employees (probably 200 engineers) built primarily on Clojure. We've branched out to more and more languages as we've grown, but at our core we have multiple millions of lines of Clojure code.


I had a similar experience running a small team. It was very difficult to find people who knew or wanted to learn Clojure. I'm also experiencing the other side of the problem, in that I know the language but cannot find a job in which to use it.


So, why not clojure? (Warning: subjective, and a bit toungue in cheek):

- Clojure is a Lisp. Lisps are very elegant in their simple tree syntax, so compilers like them, and reasoning about Lisp code is a joy. However no one has yet figured out how to represent Lisp code in a good way that doesn't require a large number of parentheses, often stacked together in groups of three or four. If you have a slight astigmatism this (((()) will not help.

- It's a JVM language. That gives you all the benefits you want in a functional language, such as type erasure, meaning you can't make a data structure of primitive types without either pretending they are objects (boxing) or writing a custom type. Also, you have access to the vast amount of JVM libraries, almost all of which use mutable data structures and nulls everywhere. The JVM lacking proper tail calls will also give you the joy of having to manually specify when you want tail recursion rather than just recursion... from the tail.


    FunCall1( FunCall2( FunCall3(arg1, arg2, arg3)))
vs

   (FunCall1 (FunCall2 (FunCall3 arg1 arg2 arg3)))
Yes those useless parentheses certainly clutter up the code. Look at how much longer the lispy line is compared to the less lispy line. Why it's got a whole -3 more characters!

And the end of the line with all the parentheses. So much uglier to see three parens at the end of the lisp line than the non-lispy line.

Truly a monstrous imposition, that lisp syntax.

And when you look at any lisp code base, look at how they don't indent their code at all. So unlike other languages. Here's literally the first project I could find looking for "lisp source code": https://github.com/adamtornhill/LispForTheWeb/blob/master/we...


Which can be rewritten as:

    (-> (FunCall3 arg1 arg2 arg3)
        Funcall2
        Funcall1)
in Clojure. https://clojuredocs.org/clojure.core/-%3E


So, you're telling me that users of a language known for syntactic abstraction came up with a way to more conveniently express a common pattern of code?

Well color me flabbergasted.


Woah, what's up with all the bile?


The tone of this and my original post are the same. Sarcasm doesn't play well in text. Apologies.


To be fair, both the imperative and the lisp version of that are probably not optimal. In most imperative (OO) languages the first one could hopefully be

    Fun3().Fun2().Fun1()
or even

    thing3.thing2.thing1
If some thing is a property of the other

Concrete example from your link: a projection and sort (I believe it is). I think the tail of this expression, with 5 parentheses, makes it very hard to read (admittedly it's a matter of experience, of course):

    (docs (iter (db.sort *game-collection*  :all 
                                            :field "votes"
                                            :asc nil)))))
Questions I find hard to answer for the above line: how many args does "iter" have? Is 5 trailing parentheses the right number?

Here is some half-functional, half-imperative strawman syntax for comparison:

   x = some_collection
          .select(c -> c.documents)
          .sort(d -> d["votes"], direction: ascending);
I find that a lot easier to balance, and know how many arguments the projection has compared to the sort.


> I think that the tail of this expression with 5 parentheses makes this very hard to read

> how many args does "iter" have?

Forget about parentheses and look at indentation. You need properly indented code, but that comes easily with a good editor:

    (foo (bar ...
              ...
              ...)
         ...
         (baz (+ x y)))
          
Both foo and bar take 3 arguments, baz only one.

Likewise, "iter" has only one argument, because there is only one subtree at the indentation level where its arguments are expected to appear. If I try to add arguments inside the set of closing parentheses, emacs actually colors them in red to signal that this is not easily readable.

http://dept-info.labri.u-bordeaux.fr/~idurand/enseignement/P...


> Questions I find hard to answer for the above line is: how many args does "iter" have?

It's obvious to me in a millisecond glance that docs and iter are called with one argument.

For one thing, the snippet does not contain a single instance of a ) parenthesis being followed by a ( parenthesis. Also, none of the ) parentheses have any material between them. It's just (sym (sym (sym stuff ...))).

Only three trailing parens are required to close (docs; the other two are for something else.

If iter had additional arguments, it might be written like this:

    (docs (iter (db.sort *game-collection*  :all 
                                            :field "votes"
                                            :asc nil)
                (another-iter-arg)))
It is still obvious that docs has one arg, iter has two and that the :all :field material is under db.sort.

https://news.ycombinator.com/item?id=9229339


Ah, the arg-count-from-indentation makes sense with the concrete example, thanks.

That said: if the code is neither correctly indented nor correctly balanced, then one is in a pickle. It's easy to balance parens in correctly indented code, and it's trivial to indent correctly balanced code.


> That said: if the code is neither correctly indented or has the parens correctly balanced - then one is in a pickle.

But you can format the code automatically, and check that all is good by looking at the shape of your code (indentation). If the paren count is right but you put elements in a strange location, like:

    (defun bar ()
      (let ((a (foo)) (print (list a)))))
Simply pretty-printing it with PPRINT will do:

    (DEFUN BAR ()
      (LET ((A (FOO)) (PRINT (LIST A)))
        ))
(C-c RET, aka. slime-expand-1-inplace, will work too).

Indentation and parentheses introduce a level of error checking by redundancy: the tools work on structured expressions, while you look at the indentation. Here above you can see that the body of the LET is empty, while there is a bogus binding in it.

The above example is in practice quite rare. Most errors are easily spotted with paren highlighting. I use paredit, which makes structural editing easy (but which is not too rigid and allows me to make unstructured changes too). You generally don't have unbalanced parenthesis, and if you don't spot the bad syntax with indentation, the compiler has a chance to complain, too.


> Fun3().Fun2().Fun1()

Hope you like Null Pointer Exceptions. (And poorly named methods.)


Neither the function names nor whether a type system has null depends on the paradigm (functional/OO/imperative/procedural/whatever).

I prefer type systems without null if given the choice, but for composition there isn't much difference between result|error and result|null (if the null/error case is actually handled). Unless you use C, you can usually handle it with something that resembles monads, or is at least a pattern-matching shorthand.

C#

   var managerStreet = employee.Manager?.Address?.Street;
which looks very similar to how Rust (which has no null) propagates errors:

   foo()?.bar()?.baz()?
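In Clojure the same short-circuiting composition is spelled with the `some->` threading macro. A minimal sketch, using a hypothetical nested `employee` map:

```clojure
;; some-> stops and returns nil as soon as any step yields nil,
;; analogous to C#'s ?. chain; `employee` is a made-up nested map
(def employee {:manager {:address {:street "Main St"}}})

(some-> employee :manager :address :street)
;; => "Main St"

(some-> {:manager nil} :manager :address :street)
;; => nil
```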


Afaik using `?` for `Option` is part of the `Carrier` trait which is unstable and only in nightly Rust for the time being.


In practice you end up with less parens than in most languages to express the same thing though. Also, the parens are largely what allows for structural editing: https://cursive-ide.com/userguide/paredit.html

When I work with Clojure, I'm able to think in terms of blocks of logic as opposed to lines of code. This is extremely powerful.

The JVM is certainly not a limitation for many applications out there. Conversely, since Clojure has recur, the only time lack of TCO comes into play is when you're trying to do mutual recursion. I haven't run into a single case where I needed that in 6 years working with the language professionally.

However, Clojure is not really tied to the JVM. It runs on both Node.js and CLR. I'm actually currently working on a Node based micro-framework for Clojure: https://github.com/macchiato-framework/macchiato-core
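The recur point above can be sketched with a minimal loop that sums integers in constant stack space:

```clojure
;; recur compiles to a jump back to the loop head, so no stack frames
;; accumulate regardless of n -- this is Clojure's substitute for TCO
(defn sum-to [n]
  (loop [i 0 acc 0]
    (if (> i n)
      acc
      (recur (inc i) (+ acc i)))))

(sum-to 1000000)
;; => 500000500000
```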


> However no one has yet figured out how to represent Lisp code in a good way that doesn't require a large number of parentheses, often stacked together in groups of three or four.

Ah, not so! In some ancient Lisp dialects decades ago, there existed an invention known as the "super bracket". Namely the closing square bracket ] would simultaneously close any number of outstanding open parentheses up to the nearest opening [. For instance [foo (bar (1 2 3] could be written instead of (foo (bar (1 2 3))). It's not really such a great idea so it didn't catch on.


Thanks, at least it seems to indicate that someone other than me had an issue with telling N parens from N-1 parens in a row.

I think this is a pretty common syntax problem; it's for example why I prefer XML to JSON for human-editable data despite the extra verbosity: each ending is explicitly labeled.

The Foo(Bar(Baz))) calling style is of course equally broken.


Looks like it was none other than Interlisp which had this, mentioned in Richard P. Gabriel's and Guy Steele's "The Evolution of Lisp" paper:

https://www.dreamsongs.com/Files/HOPL2-Uncut.pdf, p. 18.


> However no one has yet figured out how to represent Lisp code in a good way that doesn't require a large number of parentheses, often stacked together in groups of three or four.

Parinfer solved this problem for me: https://shaunlebron.github.io/parinfer/


> compilers like them

Editors too.


Although macros can make life difficult for clever editors that want to introspect code.


Don't macros actually help both compiler writers and end users here? Macros are expanded at macro-expansion time, which is a stage before code gets evaluated. Macro-expansion is akin to source code pre-processing in other languages. Therefore all necessary context for fully expanding a macro in Lisp is maximally a source file and minimally a single macro form within some shared compilation environment. Clever editors (like Emacs extensions commonly used by Scheme, Common Lisp, and Clojure programmers) make ample use of this functionality to provide all sorts of code introspection and other smart editing functionality.
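For instance, the expansion of a macro form can be inspected directly at the REPL, which is what those editor integrations build on:

```clojure
;; macroexpand-1 performs one macro-expansion step on a quoted form;
;; nothing is evaluated, so f, g, x, and a need not even be defined
(macroexpand-1 '(-> x (f a) g))
;; => (g (f x a))
```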


Syntax-level macros (like C's) are unhygienic and basically dangerous, while proper (AST-transforming rather than text-transforming) hygienic macros are OK for both tools and humans. Macros got a bad reputation just because they were poorly implemented for C.


SLIME is your friend. Stepwise forward and backward macro expansion and contraction. It's a thing of beauty.


This is mitigated by the fact that Clojure doesn't expose reader macros to the user.


I find -> and its brethren helps quite a lot.
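A small sketch of what `->` buys you: it flattens nested calls into a left-to-right pipeline at macro-expansion time.

```clojure
(require '[clojure.string :as str])

;; without threading: read inside-out
(str/split (str/lower-case (str/trim "  Hello World  ")) #" ")

;; with ->: same expansion, read left to right
(-> "  Hello World  "
    str/trim
    str/lower-case
    (str/split #" "))
;; => ["hello" "world"]
```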
