
Why Clojure? (2010) - edem
http://thecleancoder.blogspot.com/2010/08/why-clojure.html
======
twoquestions
I tried and really wanted to like Clojure, but I hated the error messages. I
love how it's a lisp, I love how you write programs in it, I love the REPL, it
all just feels nice to me.... until I miskey something or make some other
error. Then the compiler seems to hate me personally.

"Here's a haystack where the error /might/ be, have fun finding the needle
dipshit." Maybe I'm just spoiled with Elm, Rust, and Elixir's error messages,
but the last time I tried Clojure (more than a year ago) I just hit a wall
when I tried to make a toy app, as the Clojure compiler seems to hate me even
more than C's compiler does.

Has this situation improved? If so, I'd love to take another stab at it.

~~~
jeremiep
I always use something like clj-stacktrace[1] to pretty print error messages
and I've never had any issues figuring out what the problem was.

Most of the time you're working from the REPL inside your editor anyways so
the iteration loop is much, much better than any other language's. Even
without clj-stacktrace I'd still prefer Clojure development to anything else.

Rebooting a program and losing state is always worse than error messages
taking a bit longer to understand at first.

[1]: [https://github.com/mmcgrana/clj-stacktrace](https://github.com/mmcgrana/clj-stacktrace)

~~~
misterbowfinger
_Rebooting a program and losing state is always worse than error messages
taking a bit longer to understand at first._

Not sure if it's _always_ worse. I've worked in imperative & functional
languages, and the quality of error messages is key to solving bugs and
understanding a sufficiently complex system, regardless of what language it's
coded in.

~~~
jeremiep
Maybe not always, but it's important enough to consider.

You often have much more information at runtime than at compile time. Sure, I
can run the compiler and scan its errors, but I feel it's much more productive
to poke around the program as it's running. Even when working in C++/whatnot I
learn more about programs by running them in the debugger than reading the
code (reading UE4's source code would be weeks of head scratching because it's
so huge - 2 days of stepping into its boot process and you know and
_understand_ most of its architecture).

I'd argue that the hardest bugs to find are the ones your type-system is
completely helpless against. Few languages protect against null pointers and
every time you have a cast you're breaking out of the type system and its
guarantees. Dependent types aren't frequent either so you're now augmenting
the type-system with asserts and guards and whatnot.

I find that in C++/Java/C# complex systems are the norm because everything is
built out of mutable blocks and misconceptions about what makes programs fast.

Every project I did in Clojure was a fraction of the complexity it would've
had in imperative languages. You're reducing complexity _so much_ that the
"type-system is good for complex bugs" argument almost vanishes.

~~~
misterbowfinger
_... I learn more about programs by running them in the debugger than reading
the code_

Agreed - in Ruby, most of my development was driven by a REPL & debugger, and
tests. That's an example of a primarily imperative language, though it borrows
some ideas from Lisp. But there, again, understanding error messages was key.

------
rooundio
My "functional programming epiphany" came in a talk by Martin Odersky, who
remarked that imperative programming is like thinking in terms of time,
whereas functional programming is like thinking in terms of space. Don't think
about the sequence of how to achieve things, but the building blocks needed to
do so. That nailed it, and I've been a Scala convert ever since.

~~~
throwaway7645
Best example I heard was in an F# talk. The guy used a bartending analogy:

FP => I'll have a Sazerac

Imperative => Excuse me sir, could you take some ice, add rye whiskey, add
bitters, add absinthe, shake, strain into a glass, and add a lemon garnish
before bringing it to me

~~~
watt
Well, isn't this nice - outsourcing the knowledge of what makes a Sazerac, and
how to make it, to somebody else, and just declaring that you want one?

Would you mind actually making a Sazerac in your FP "analogy" as well?

~~~
virmundi
I like Clojure, but stopped using it due to not having a good way to define
DTOs at a service level (I prefer noisy statically typed languages,
apparently). Best guess:

    
    
      (-> {}
        ice
        (rye :2-fingers)
        bitters
        absinthe
        shake
        strain
        garnish)
    

Now, I think that `strain` flipped the returned type from a drink to a glass
with the drink in it.

All this shows is that OO and FP are duals [1]. I don't claim to get FP
perfectly, but my moment of zen was realizing this.

1 -
[http://wiki.c2.com/?ClosuresAndObjectsAreEquivalent](http://wiki.c2.com/?ClosuresAndObjectsAreEquivalent)

~~~
doublerebel
Thank you, I had not seen that c2 page. Many in the JavaScript community have
fervent arguments for/against closures/objects (which I do not share). The
educated debate in that link is a quality resource on the subject.

------
MarcusBrutus
I've used Clojure in a few cases and I totally adore the simplicity of its
Lisp syntax, as opposed to the baroquesque abomination that is Scala. However,
the lack of strong typing is sorely felt. I've done a little playground-style
OCaml coding, and the feeling you get with OCaml is that once your program
compiles, it most likely also runs correctly. Is a Lisp with strong typing for
the Java ecosystem too much to ask? Apparently it is, or else we would have
had it by now.

~~~
kazinator
> _I've done a little playground-style OCaml coding and the feeling you get
> with OCaml is that once your program compiles, it most likely also runs
> correctly._

This is only a feeling. Without carefully testing the code to exercise its
cases, all you know is that it is properly typed.

For instance, suppose we write a complicated function (or group of functions)
which goes through a block of intermediate code (output of a compiler) and
assigns registers to all the temporaries, introducing memory spills in
situations where more variables are live than the available registers.

We might have a _feeling_ that because this code compiles, it must be free of
problems such as accidentally assigning the same register to two variables
which have overlapping lifetimes.

That feeling is poorly supported by reality; it is the "safyness" of static
typing.

Untested code is garbage. And thorough testing is difficult to impossible, so
there is a bit of garbage in almost all software, unfortunately.

Statically type checked code has all of its code paths effectively tested by
the compiler, but those tests have only the limited point of view of trying to
show that the program contains a trivial type mismatch, a close cousin of the
syntax error. These "tests" are not actually feeding values into the code and
trying to make it fail or behave incorrectly with respect to its
specification.

~~~
rkrzr
> Statically type checked code has all of its code paths effectively tested by
> the compiler

The type checker just ensures that a program is well-typed (i.e. free of type
errors).

The better the type system, the more program properties can be encoded in it
and the more errors it can catch (up to the point of proving the correctness
of your program).

But you are right that type checking alone is no substitute for testing. They
are orthogonal concepts and both should be employed to ensure that your
programs behave correctly.

One nice thing about statically typed languages is that they can automatically
generate some tests for you, e.g. with the QuickCheck library for Erlang and
Haskell.

~~~
jeremiep
Clojure has quickcheck as well:
[https://github.com/clojure/test.check](https://github.com/clojure/test.check)
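A minimal property sketch with test.check (the property name and the round-trip property here are just illustrative):

```clojure
;; Requires org.clojure/test.check on the classpath.
(require '[clojure.test.check :as tc]
         '[clojure.test.check.generators :as gen]
         '[clojure.test.check.properties :as prop])

;; Property: reversing a sequence twice yields the original.
(def reverse-twice-prop
  (prop/for-all [v (gen/vector gen/int)]
    (= v (reverse (reverse v)))))

;; Run 100 randomized trials:
(tc/quick-check 100 reverse-twice-prop)
;; => {:result true, :num-tests 100, ...}
```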

~~~
rkrzr
Good point. Of course, you then have to specify the type of the values that
you want to test in the test itself, so you have to specify some types one way
or the other.

~~~
jeremiep
You can feed those types from clojure.spec now :)
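A sketch of what that looks like (spec moved to the `clojure.spec.alpha` namespace in mid-2017; the `::age` spec is a made-up example):

```clojure
;; Requires Clojure 1.9+ and org.clojure/test.check for generation.
(require '[clojure.spec.alpha :as s]
         '[clojure.spec.gen.alpha :as sgen])

;; The spec plays the role of a type for generated test data.
(s/def ::age (s/int-in 0 130))

;; Draw a few sample values from the spec's generator:
(sgen/sample (s/gen ::age) 5)
;; e.g. => (0 1 1 2 12)
```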

------
armitron
One issue with Clojure is that it comes with strong opinions (STM, immutable
data structures, JVM-Java ecosystem) and thus is not as paradigm-enclosing as
other Lisps (e.g. Common Lisp).

I'd much rather have SBCL's native code compiler, read/compiler macros,
conditions and restarts (that I end up using on pretty much every project) and
optionally use libraries for immutability and STM (if and when I need them),
than compromise from the get-go and use a language that reduces the set of
available options by forcing its specific worldview.

If I do need a strong focus on concurrency, I find Erlang (and also Elixir) a
much more coherent solution. The cognitive dissonance that comes from having
to interact with Java when using Clojure is very damaging and it can't be
abstracted away. Just look at Clojure stack traces.

So to end with, Clojure proponents should understand its limitations and
design choices. It was created because Hickey needed a language to solve
problems he was having in his specific arena (delivering concurrent
applications in the Java ecosystem with his consultancy) and not really to
improve on the Lisp state-of-the-art side of things. If you're a Java guy and
absolutely need to stay in that ecosystem, I guess you can go with it.
Otherwise, you end up giving away too many things. This has been validated in
practice, in the Common Lisp community. We get a lot of guys who are coming
_from_ Clojure, but it is rare for someone to move _to_ Clojure from CL.

~~~
throwaway7645
I really tried SBCL, but even knowing a little Lisp (I own about 6 books and
have read most of them), I couldn't get past the tooling. SLIME+Emacs is
powerful, but the tutorials are awful and very lacking. Sadly, Clojure isn't
much better here. Racket is pretty good in this respect, but I can't get past
the fact that I'm essentially playing with an educational product and not a
real industrial language.

~~~
junke
Some resources:

[https://common-lisp.net/project/slime/doc/html/](https://common-lisp.net/project/slime/doc/html/)

[https://www.youtube.com/watch?v=_B_4vhsmRRI](https://www.youtube.com/watch?v=_B_4vhsmRRI)

[http://www.jonathanfischer.net/modern-common-lisp-on-linux/](http://www.jonathanfischer.net/modern-common-lisp-on-linux/)

[http://trac.clozure.com/ccl/wiki/InstallingSlime](http://trac.clozure.com/ccl/wiki/InstallingSlime)

------
donjigweed
For me, Clojure is the most satisfying language to write in. The design of the
language feels impeccable most of the time. Working in the repl, with all of
your application code loaded into the runtime, sitting in an adjacent terminal
or editor tab, is immensely satisfying, much like the experience of a
craftsman working with hand tools. Unfortunately, I find Clojure code to
suffer in terms of readability, and I think it suffers from the same problem
all dynamic languages do: they just don't scale to larger applications and
teams as well as static languages. Proof of this is the fact that pretty much
every dynamic language out there eventually looks to graft something like a
static type system on after the fact. Basically, I think people advocating
Clojure outside the context of small, high-ability teams are (mistakenly,
imho) prioritizing write mode over read mode.

~~~
iLemming
Au contraire - many find Clojure code extremely readable (even compared to
other Lisps). It's possible to write obscure functions in any language;
developing the good taste and habits to write nice, reasonable code is just a
matter of practice. I believe Clojure gives you enough discipline to learn
that fast. It just seems there are certain types of devs who are inherently
dyslexic when it comes to Lisps - they struggle to read them. Where a Lisp
developer sees structure and beauty, they see nothing but a quivering beehive
of parentheses.

------
wahern
They left out the part where Clojure DOES NOT implement tail-call optimization
for mutually recursive functions. Without TCO, you can't fully leverage
functional composition and immutable data structures without introducing some
ugly hacks or relying on other language-level constructs offered by the
implementation--that they can't be implemented in the language strongly
suggests something about the limitations of the language.[1]

A great language that does optimize mutually recursive tail-calls, among many
other elegant features: Lua!

... TCO, asymmetric stackful coroutines (90% as powerful as call/cc, but with
zero calories), lexical closures, prototype-based object orientation, duck
typing-based object orientation (among other possibilities), and a first-class
C API that allows C code to work cleanly with coroutines, closures, the object
system(s)....

Lua is truly multi-paradigm. The only downside is dynamic typing, though
that's not always a liability and often an asset. Also, Lua has other
interesting features, like lexical global namespace substitution using _ENV,
that permit devising solutions to minimize some of the headaches of dynamic
typing. And because Lua is so powerful in so many dimensions, yet so simple
and tiny, writing unit and regression tests is often a breeze.

Once you add the canonical extension module LPeg into the mix (written by one
of the Lua co-maintainers), Lua is about as formidable as they come. Only Perl
6 comes close to the ease of writing parsers with Lua+LPeg.

[1] The JVM will eventually add support for implementing TCO in Clojure and
other languages, at which point many Clojure proponents will find religion in
the power of TCO.

~~~
retrogradeorbit
Clojure does actually support TCO; it doesn't support _automatic_ TCO because
of JVM limitations. And after using explicit TCO calls in Clojure for a while,
I actually prefer it now over automatic TCO, because I can clearly see from
reading my code where tail calls are optimized and where they build up stack.
I don't have to try and work out if the recursive call is in the tail
position. It's all explicit.

For mutually recursive TCO there's `trampoline`:

[https://clojuredocs.org/clojure.core/trampoline](https://clojuredocs.org/clojure.core/trampoline)
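A minimal sketch of `trampoline` with the classic mutually recursive even/odd pair - each function returns a thunk instead of calling the other directly, so the stack stays flat:

```clojure
(declare my-odd?)

(defn my-even? [n]
  (if (zero? n) true #(my-odd? (dec n))))

(defn my-odd? [n]
  (if (zero? n) false #(my-even? (dec n))))

;; Direct mutual recursion would overflow the stack at this depth;
;; trampoline keeps invoking the returned thunks in a loop instead.
(trampoline my-even? 1000000)
;; => true
```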

~~~
nemoniac
Agreed that explicit is better than implicit but I can't imagine when you
might want tail calls to build up stack. And if you do have a use case for it
in mind, wouldn't you want to be explicit about that and have TCO be the
default?

~~~
retrogradeorbit
Well that (default being TCO, and explicitly declaring a stack call) would
mean every normal function call (the vast majority) would need to be
explicitly declared (call func args...), and the implicit case would need to
blow up everywhere it's invoked in a non-tail position.

What I was getting at is: in an automatic TCO language you don't actually know
if the call is optimised or not. You might think it's optimised, but it isn't,
because it's not in the tail position. You don't find this out until one day
in production the stack explodes. The only way to know if it's optimised is to
determine whether it is in the tail position. That is sometimes
straightforward, but sometimes requires careful thought. It is certainly not
always obvious.

Additionally, another coder can come along and break your tail position.
You've gone "return func()" and it's TCO and then later someone changes it to
"return 1 + func()" and now the call to func is not TCO, but there is no
warning and no obvious outward signs that this has changed.

In the explicit TCO of clojure, if the second programmer adds the extension
the compiler will explode with something like "recur not in tail position",
immediately informing you that this has happened, and then you may either
refactor to put it back in the tail position, or change it to a full stack
function call once you determine that building up stack is ok in this case.

Now, I think loop/recur was implemented in Clojure as a workaround for the JVM
limitations, but in doing so I think Hickey stumbled onto a really great new
syntactic formulation for TCO: explicitly declared TCO. I like it.
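For reference, a minimal loop/recur sketch showing both sides of that trade-off:

```clojure
;; recur in tail position: compiles to a constant-stack loop.
(defn sum-to [n]
  (loop [i 1 acc 0]
    (if (> i n)
      acc
      (recur (inc i) (+ acc i)))))

(sum-to 1000000)
;; => 500000500000

;; Moving recur out of tail position is a compile-time error:
;; (defn bad [n] (inc (recur (dec n))))
;; => CompilerException: Can only recur from tail position
```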

~~~
paulddraper
This is why I like Scala's approach so much.

It will TCO automatically. If you want to make sure you haven't made a
mistake, simply add the @tailrec annotation, and the compiler will give you an
error if it can't TCO.

~~~
retrogradeorbit
That sounds pretty good! I will check that out.

------
vesak
I think my language of focus in 2017 will be Racket. It seems to be the
perfect combination of optional (but somewhat static!) strong typing, highly
advanced features and cool Lispness without becoming impractical.

Minilanguages... for everybody!

~~~
throwaway7645
I have a comment on Racket below. I like the concept, but it seems to be
pretty slow. I'm also not sure how many mini-languages are out there. What the
"Red" language team is doing seems to be a similar concept. That makes sense,
as Racket is a Lisp and Red is mostly from Rebol, but Red has a native code
compiler and comes with a tiny .exe. Their GUI engine is phenomenal.

~~~
capnrefsmmat
On Scheme benchmarks, Racket does fairly competitively, probably due to its
JIT:

[http://ecraven.github.io/r7rs-benchmarks/benchmark.html](http://ecraven.github.io/r7rs-benchmarks/benchmark.html)

(Chez is still freakishly fast.)

However, if you're writing code in DrRacket, it has full debugging information
enabled, which may slow the code down. So your REPL performance in DrRacket
may be slower than if you ran Racket from the command line.

[https://docs.racket-lang.org/guide/performance.html](https://docs.racket-lang.org/guide/performance.html)

~~~
royallthefourth
The debugging slows the code down a ton in my experience, and it also prevents
futures from running in parallel. Trying out the futures code from the guide,
failing to disable debugging slows down the final Mandelbrot example by
something like 100x on my machine.

Fortunately, DrRacket's friendly GUI makes it pretty easy to toggle debugging.
I leave it in place most of the time, but occasionally the slowdown is just
unbearable.

~~~
throwaway7645
Thanks for the tips! I'd like to use ChezScheme, but haven't found a good way
to get into it.

------
gigatexal
There should be a rule that if one writes a post saying language X is faster
than languages A, B, and C, there should be some real-world benchmark to add
data to the discussion.

~~~
WillPostForFood
That's a fair request, but the article doesn't say Clojure is faster than any
other language; it just says it is fast. That caught my attention as well,
because I don't think of it as being particularly fast (mostly due to startup
time).

~~~
sgift
> because I don't think of it as being particularly fast (mostly due to
> startup time).

Proposal: Fast for server use-cases?

Curiously, I was never really interested in the startup time of most of my
code, probably because most of my code is long running where it doesn't matter
if it takes a few seconds more to start.

~~~
edem
I was not interested in that either until I had to start Java processes by
hand. The initial version ran for 5 minutes. I then refactored the app to be a
long running service instead of a fire-and-forget app and the run time changed
from 300 seconds to 300 microseconds!

~~~
sgift
Five minutes? Ouch .. when I think of "slow" start of Java processes it is
more in the range of 5 to 10 seconds. Maybe a minute tops for an application
server, but five minutes sounds ugly. Especially when it is all just start
time.

~~~
edem
Startup of a single JVM was around 10 secs, but I had to start up a JVM for
each operation (think of it like a JUnit test suite run from the command
line). If you have several of these processes, the whole thing is no longer
O(1) but O(n), and you begin to see the JVM startup overhead.

------
bschwindHN
Being able to target servers, web applications, and mobile apps is compelling
enough for me (at least for personal projects). Throw in the simplicity of the
syntax and the low dev iteration times and it makes me have to ask "Why not
Clojure?" when starting something new.

~~~
rubber_duck
>and mobile apps

Last time I tried (which was probably over 2 years ago by now), the Clojure
Android port had a huge loading time, to the point of not being usable for
anything serious (it offered no tangible benefits that would justify the load
time), and iOS was something based on that JVM-for-Android port that Xamarin
bought and then shut down. Has that changed? Or are you talking about CLJS +
React Native and similar tech?

~~~
bschwindHN
Yeah, I'm talking about React Native. I'm on mobile at the moment, but look up
"re-natal" - it's built on top of React Native with re-frame, and it's superb.

Edit: [https://github.com/Day8/re-frame](https://github.com/Day8/re-frame)

[https://github.com/drapanjanas/re-natal](https://github.com/drapanjanas/re-natal)

------
sandGorgon
The jvm platform has become (again) the most exciting platform to work with in
recent times. And my favorite there is Kotlin.

kotlin 1.1 (still in milestone) is a brilliant and compelling language to use
on the JVM.

Spring 5 is a very "functional" web framework that will come out in a few
months with first-class Kotlin support (even today, it is fairly excellent
[1]).

Vert.x [2] is coming up with built-in Kotlin support; coroutines got merged
fairly recently [3].

Reactor ([https://projectreactor.io/](https://projectreactor.io/)) and Rxjava
have had kotlin support for a long time.

The tooling is excellent (umm.. it was developed by Jetbrains).

The killer app/functionality? Android. Kotlin is the Swift of Android, and
that's where its uptake is coming from.

[1] [https://github.com/sdeleuze/spring-boot-kotlin-demo/tree/all-open](https://github.com/sdeleuze/spring-boot-kotlin-demo/tree/all-open)

[2] [http://vertx.io/whos_using/](http://vertx.io/whos_using/)

[3] [https://blog.jetbrains.com/kotlin/2016/07/first-glimpse-of-kotlin-1-1-coroutines-type-aliases-and-more/](https://blog.jetbrains.com/kotlin/2016/07/first-glimpse-of-kotlin-1-1-coroutines-type-aliases-and-more/)

------
grzm
Full title: Why Clojure is better than C, Python,Ruby and java and why should
you care

I recommend updating the title to "Why Clojure is better than C, Python, Ruby,
and Java"

~~~
Nekorosu
The original title is more clickbait. The article highlights good things about
Clojure, but it doesn't say anything about the languages mentioned in the
title.

The current one is much better.

~~~
grzm
It also matches the title of the original post.

------
vonnik
Fwiw, here's a deep learning framework for the JVM that some Clojure people
are using: [https://deeplearning4j.org/](https://deeplearning4j.org/) I bring
it up not just because LISP was conceived as a language for AI, but because of
the points about distributed computing made in the post. Multithreading is
baked into the JVM, which is important because advances in AI are
computationally intensive.

~~~
aliakhtar
In what ways is Lisp / Clojure better for AI than say, Scala?

------
dominotw
We had the majority of our codebase in Clojure back in 2011, but we found that
it presented major roadblocks to hiring new team members. Even some
experienced people had a hard time getting into it, and the learning curve was
non-trivial. So we dropped it. Probably not a good idea to build a big company
on a not-so-popular programming language [1].

1\. [http://www.tiobe.com/tiobe-index/](http://www.tiobe.com/tiobe-index/)

~~~
yogthos
My team has been using Clojure for the past 6 years, and we have the opposite
experience. Most new developers we've hired haven't used Clojure before, but
they were able to become productive doing useful work within weeks. The fact
that we're using Clojure tends to be a positive factor for attracting
candidates as well. A number of candidates we've interviewed mentioned that
Clojure was the reason they applied for the job. They heard about the language
and they were interested in trying to work with it professionally. In
addition, we hired contractors and co-op students to work with our team, and
they were able to pick up Clojure as well.

At this point I would actually use Clojure as a filter. I'm rather suspicious
of any developer who wouldn't be able to learn Clojure.

~~~
Jach
How do you convince management to treat getting up to speed with a programming
language (and your group's particular usage of it) as just another cost of
onboarding? Or even better, how do you make the case to whoever is in charge
of hiring that they shouldn't weight prior experience with the company's
language too heavily over newcomers, when adequate proficiency is only a
matter of a few weeks? All else being equal, I think most hiring managers
would naturally take the dev who already knows the language, but if it doesn't
take long to learn the language up to the company's standards, then it's not
all else equal - all is effectively equal, and another metric is needed.

~~~
yogthos
I guess I'm just lucky to work at a place where technical people make
technical decisions. The management does not decide what languages are used or
how developers are onboarded where I work.

------
alkonaut
So, why not Clojure? (Warning: subjective, and a bit tongue-in-cheek):

\- Clojure is a Lisp. Lisps are very elegant in their simple tree syntax, so
compilers like them, and reasoning about Lisp code is a joy. However, no one
has yet figured out how to represent Lisp code in a good way that doesn't
require a large number of parentheses, often stacked together in groups of
three or four. If you have a slight astigmatism, this (((()) will not help.

\- It's a JVM language. That gives you all the benefits you want in a
functional language, such as type erasure, meaning you can't make a data
structure of primitive types without either pretending they are objects
(boxing) or writing a custom type. Also, you have access to the vast amount of
JVM libraries, almost all of which use mutable data structures and nulls
everywhere. The JVM lacking proper tail calls will also give you the joy of
having to manually specify when you want tail recursion rather than just
recursion... from the tail.

~~~
greggyb

        FunCall1( FunCall2( FunCall3(arg1, arg2, arg3)))
    

vs

    
    
       (FunCall1 (FunCall2 (FunCall3 arg1 arg2 arg3)))
    

Yes those useless parentheses certainly clutter up the code. Look at how much
longer the lispy line is compared to the less lispy line. Why it's got a whole
-3 more characters!

And the end of the line with all the parentheses. So much uglier to see three
parens at the end of the lisp line than the non-lispy line.

Truly a monstrous imposition, that lisp syntax.

And when you look at any lisp code base, look at how they don't indent their
code at all. So unlike other languages. Here's literally the first project I
could find looking for "lisp source code":
[https://github.com/adamtornhill/LispForTheWeb/blob/master/we...](https://github.com/adamtornhill/LispForTheWeb/blob/master/web_with_persistent_backend.lisp)

~~~
alkonaut
To be fair, both the imperative and the lisp version of that are probably not
optimal. In most imperative (OO) languages the first one could hopefully be

    
    
        Fun3().Fun2().Fun1()
    

or even

    
    
        thing3.thing2.thing1
    

if one thing is a property of the other.

Concrete example from your link: a projection and sort (I believe). I think
that the tail of this expression, with 5 parentheses, makes it very hard to
read (admittedly it's a matter of experience, of course):

    
    
        (docs (iter (db.sort *game-collection*  :all 
                                                :field "votes"
                                                :asc nil)))))
    

Questions I find hard to answer for the above snippet: how many args does
"iter" have? Is 5 trailing parentheses the right number?

Here is some half-functional, half-imperative strawman syntax for comparison:

    
    
       x = some_collection
              .select(c -> c.documents)
              .sort(d -> d["votes"], direction: ascending);
    

I find that a lot easier to balance, and know how many arguments the
projection has compared to the sort.

~~~
kazinator
> _Questions I find hard to answer for the above line is: how many args does
> "iter" have?_

It's obvious to me in a millisecond glance that docs and iter are called with
one argument.

For one thing, the snippet does not contain a single instance of a )
parenthesis being followed by a ( parenthesis. Also, none of the ) parentheses
have any material between them. It's just (sym (sym (sym stuff ...))).

Only three trailing parens are required to close (docs; the other two are for
something else.

If iter had additional arguments, it might be written like this:

    
    
        (docs (iter (db.sort *game-collection*  :all 
                                                :field "votes"
                                                :asc nil)
                    (another-iter-arg)))
    

It is still obvious that docs has one arg, iter has two and that the :all
:field material is under db.sort.

[https://news.ycombinator.com/item?id=9229339](https://news.ycombinator.com/item?id=9229339)

~~~
alkonaut
Ah, the arg-count-from-indentation makes sense with the concrete example,
thanks.

That said: if the code is neither correctly indented _nor_ has its parens
correctly balanced - then one is in a pickle. It's easy to balance parens in
correctly indented code, and it's trivial to indent correctly balanced code.

~~~
junke
> That said: if the code is neither correctly indented nor has its parens
> correctly balanced - then one is in a pickle.

But you can format the code automatically, and check that all is good by
looking at the shape of your code (indentation). If the paren count is the
same but you put elements in a strange location, like:

    
    
        (defun bar ()
          (let ((a (foo)) (print (list a)))))
    

Simply pretty-printing it with PPRINT will do:

    
    
        (DEFUN BAR ()
          (LET ((A (FOO)) (PRINT (LIST A)))
            ))
    

(C-c RET, aka. slime-expand-1-inplace, will work too).

Indentation and parentheses introduce a level of error checking by redundancy:
the tools work on structured expressions while you look at the indentation.
Here, above, you can see that the body of the LET is empty, while there is a
bogus binding in it.

The above example is in practice quite rare. Most errors are easily spotted
with paren highlighting. I use paredit, which makes structural editing easy
(but which is not too rigid and allows me to make unstructured changes too).
You generally don't have unbalanced parenthesis, and if you don't spot the bad
syntax with indentation, the compiler has a chance to complain, too.

------
nodivbyzero
Is Clojure better than Erlang?

~~~
retrogradeorbit
I use them both. They are very different beasts. I wouldn't say one is better
than the other. They have different trade offs.

In Clojure I miss the preemptive VM support for the Erlang "processes", the
pattern matching and the elegance of the pid mailboxes.

In Erlang I miss the fantastic built-in data structures, the platform reach
(targeting the browser, for example), the syntax (refactoring Clojure is a
dream), macros, the REPL, the tooling, the time constructs (refs, agents,
atoms) and a bunch of more esoteric stuff. (I don't miss core.async because
Erlang's processes are much better.)

I tend to use Clojure by default and Erlang when the problem particularly
suits it.

~~~
joncampbelldev
As an aside, you can get pattern matching in clojure as a library[0], though
I'm not familiar with erlang so I can't say if clojure's implementation is a
complete match in terms of functionality.

[0]
[https://github.com/clojure/core.match](https://github.com/clojure/core.match)
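A small core.match sketch (the message shapes here are made up for illustration):

```clojure
;; Requires org.clojure/core.match on the classpath.
(require '[clojure.core.match :refer [match]])

;; Dispatch on the shape of a message vector, Erlang-style.
(defn handle [msg]
  (match msg
    [:ping]    :pong
    [:add a b] (+ a b)
    :else      :unknown))

(handle [:add 1 2]) ;; => 3
(handle [:ping])    ;; => :pong
(handle [:oops])    ;; => :unknown
```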

------
fleetfox
I got a book about Clojure and was really trying to get into it. Some of the
concepts really resonated with me, but I can't get over the Lisp syntax.

~~~
kimi
I also hated the syntax when I got started. I found it very hard to read
Clojure code. After you use it for a while, you will not "see" it anymore (and
instead you will appreciate its clarity). Don't give up!

~~~
Slackwise
It also helps to turn parens grey via syntax highlighting so they are still
visible, but don't clutter all the wonderfully wordy symbols.

I've also started turning all the special character noise down in all
languages now. I'm usually trying to read the words, not all the cruft.

------
hotBacteria
Just to add a bit of context, this article was written more than six years
ago: [http://thecleancoder.blogspot.fr/2010/08/why-clojure.html](http://thecleancoder.blogspot.fr/2010/08/why-clojure.html)

~~~
grzm
Thanks! I knew I had read a blog post with this title somewhere, but I
couldn't put my finger on it. Here's the related HN post from 6 years ago:
[https://news.ycombinator.com/item?id=1615182](https://news.ycombinator.com/item?id=1615182)

~~~
haylem
Same here: I was reading, and when I got to the anecdote about some basic
concept being introduced only around page 216, I thought "hang on, I've read
that before..."

And I had the same thought as back then, too: "Concepts, Techniques and Models
of Computer Programming" _also_ used this approach, except it's only after a
few chapters that they teach you "oh, by the way, Oz also supports for and
while loops" whereas everything until then used recursion. All my eng school
buddies found that off-putting, but I always thought that if you put the book
in the hands of a complete newcomer to programming, they would never even
think of it.

(I know SICP predates CTM by decades and may not be considered in the same
league, and that Mozart/Oz is rather more obscure than any Lisp/Scheme ever
can be, but I consider it a fairly important book as well, and one of the best
textbooks I've ever read: very well structured, well written, very complete;
it starts shallow but goes very deep and very wiiiiidddeee in terms of
knowledge.)

------
jakebasile
Bit of an aside, but this is the first time I've seen an article published on
telegra.ph, Telegram's Medium-like article hosting service.

See: [https://telegram.org/blog/instant-view](https://telegram.org/blog/instant-view)

~~~
4lejandrito
[https://news.ycombinator.com/from?site=telegra.ph](https://news.ycombinator.com/from?site=telegra.ph)

