
Clojure – Fascinated, Disappointed, Astonished (2016) - systems
http://lewandowski.io/2016/01/clojure-summary/
======
dwohnitmok
Just so people don't get the wrong idea, AFAICT STM isn't used that much in
production Clojure (unlike say Haskell). This I chalk up to the lack of a
couple crucial operators in Clojure's STM. Instead the usual method for
dealing with concurrency is atoms. Note that once you get familiar with how to
use atoms to structure concurrency, this is a lesson that is widely
applicable. You can, for example, write essentially the same code in Java with
an AtomicReference (how atoms are implemented anyways) as long as you take
care to only put immutable data structures in the ref.
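
The atom pattern mentioned above looks roughly like this (a minimal sketch; the names `visits` and `record-visit!` are made up):

```clojure
;; An atom holds an immutable value; swap! applies a pure function to
;; the current value with compare-and-set semantics, retrying on contention.
(def visits (atom {}))

(defn record-visit! [user]
  ;; fnil supplies 0 when the user has no entry yet
  (swap! visits update user (fnil inc 0)))

(record-visit! "alice")
(record-visit! "alice")
(record-visit! "bob")

@visits ; => {"alice" 2, "bob" 1}
```

Because only immutable maps go into the atom, readers can deref it at any time without locking.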

I would personally sum up Clojure's main impact on my own programming in two
ways.

1\. 90% of the time you just need immutable maps, sets, vectors, and
primitives. Fancy custom classes and custom higher order functions usually
aren't needed.

2\. Wishing more things had on-the-fly editing of a running program.

~~~
arohner
In my experience, STM isn't used because it's very rare to have situations
where you need STM but don't need durability. In most production situations,
you need to record state changes in a DB or an event log. In that case, the
source of truth is now the DB, and your in-memory values are immutable
snapshots.

The other reason is that Clojure code is usually well-factored, and most
clojurians have listened to Rich and simplified their programs. Not many
problems require STM transactions across multiple refs. If you don't have
multiple refs changing transactionally, you can use atoms instead for a
performance benefit.

------
bcrosby95
It's missing my favorite concurrency construct: immutable, persistent data
structures. I was initially interested in Clojure for things like STM, but found
most of my concurrency problems are amenable to immutable data. You can
certainly use this technique in other languages, but it tends to be higher
effort.

~~~
kubanczyk
No. Immutable structures still require locking.

If you are spending 100 ms calculating the new version of an immutable
structure, you want to make sure that a nearby thread isn't making its own
version. You don't want either of the two results silently overwritten. You
want the nearby thread to be sequential: to wait for you and build on your
result.

~~~
dpratt71
This is not how persistent data structures are used in practice. There is no
"the" new version of a persistent structure. The best analogy I can think of
is to compare it to a VCS (e.g. git). There is no need to lock any existing
commit in order to create a new commit (which together with prior commits,
represents a new version of the code).
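
The commit analogy is easy to see at a REPL; here's a small illustration (not from the thread):

```clojure
(def base [1 2 3])          ; the shared "commit"

;; Two threads could each derive a new version independently;
;; neither needs a lock, and neither disturbs the other:
(def theirs (conj base 4))  ; => [1 2 3 4]
(def mine   (conj base 5))  ; => [1 2 3 5]

;; The base version is untouched; both new versions share its
;; structure internally rather than copying it.
base ; => [1 2 3]
```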

~~~
majormajor
I'm not familiar with Clojure, but you can hit conflicts in the Git world,
though, which seem to be what the parent is concerned about. Two of us could
be creating some new data based on the last data we had, at time T, and then
the other person submits theirs at time T+5, and I submit mine at time T+10.
In that case, my change hasn't taken theirs into account.

~~~
escherize
If you can "submit" your change back to the original data structure, then the
original data structure is not immutable, right? Here's a nice explanation of
how persistent immutable data structures work:
[https://hypirion.com/musings/understanding-persistent-vector-pt-1](https://hypirion.com/musings/understanding-persistent-vector-pt-1)

~~~
setr
I believe the question is that, if two threads take the same immutable vector,
and both make a change to it independently, they'll end up with two new
vectors (eg two branches in git); a vector that reflects thread1's change, and
a second vector that reflects thread2's change. So now you have a conflict,
which requires resolution; git has a human intervene.

eg

x = [1,2,3]

Thread1 -> x + [4] => [1,2,3,4]

Thread2 -> x + [5] => [1,2,3,5]

But you were expecting [1,2,3,4,5]

Reality was that you wanted an order to your events, normally enforced by
locking, which the immutable vector doesn't seem to help you with; they were
both able to update independently, but you actually wanted them to update
dependently.

If you try to use immutable datastructures to avoid locking, then how is
conflict resolution handled?

I _think_ the answer is that it doesn't help you avoid locking; either you
lock and share a single reference to the latest version of your immutable
vector, to enforce ordered events, or you define a resolution strategy
separately. The immutability aspect just stops you from silently having no
resolution strategy -- which would always be incorrect.

And if I understand correctly, the ideal scenario for immutable datastructures
in concurrent scenarios is when you can define such a merge strategy (and
safely give threads their own copy of the datastructure to muddle with,
without actually having to copy the entire datastructure)
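
To make that branch-and-merge idea concrete, here is a sketch in Clojure; `merge-versions` is an invented, application-specific resolution strategy, not something the language provides:

```clojure
(def base [1 2 3])
(def v1 (conj base 4))   ; thread 1's independent version
(def v2 (conj base 5))   ; thread 2's independent version

;; One possible resolution strategy: keep the common base and append
;; whatever each thread added. Entirely application-defined.
(defn merge-versions [base a b]
  (let [n (count base)]
    (into base (concat (drop n a) (drop n b)))))

(merge-versions base v1 v2) ; => [1 2 3 4 5]
```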

~~~
dpratt71
You _could_ , as per your example, use locking as part of a resolution/merge
strategy to combine the results of two separate computations running on two
separate threads. Or you could use some strategy that does not involve
locking. Either way, it does not support the original claim I disputed that
"Immutable structures still require locking".

~~~
setr
>Either way, it does not support the original claim I disputed that "Immutable
structures still require locking".

It does, if you believe serialization by locking is the main strategy to
handle serialization (in which case, mutable or immutable, you still need to
lock), and so... you still need locking. Serialization being the main scenario
GP gave.

Your original answer didn't resolve the problem either -- fine, you didn't
need to lock when adding elements to your immutable structure, but you still
haven't reached serialization; you've just pushed the problem back another
step.

The answer that I believe GP needs to correct his understanding (and, much
more importantly, the answer that I'm interested in :-) is: what serialization
strategies do immutable data structures enable, if not locking?

The other correction GP seems to require is whether serialization is actually
that important in general, and whether functional programmers tend to find
otherwise... but I don't care about that answer :-)

~~~
Scarbutt
_the answer that I'm interested in :-) is what serialization strategies do
immutable data structures enable, if not locking?_

Depending on your performance goals, Compare-and-Set with retries a la
clojure's atom reference construct.

[https://clojure.org/reference/atoms](https://clojure.org/reference/atoms)
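
Roughly, swap! on an atom is just such a compare-and-set retry loop. A simplified reimplementation (the real one lives in Java) might look like:

```clojure
;; Read the current value, compute the new one from it, and publish it
;; only if nobody got there first; otherwise retry from the fresh value.
(defn my-swap! [a f & args]
  (loop []
    (let [oldv @a
          newv (apply f oldv args)]
      (if (compare-and-set! a oldv newv)
        newv
        (recur)))))

(def counter (atom 0))
(my-swap! counter + 10)
@counter ; => 10
```

No locks are held while f runs; the cost of contention is recomputation, which is safe precisely because the values involved are immutable.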

------
nickik
A couple of things to point out for new Clojure readers.

The 'def' form should not be used inside of 'defn'. Local bindings should use
'let'.

When using the STM, 'ref-set' should not be used most of the time:

    
    
      (def names (ref []))
    
      (defn add-name [name]
        (dosync (alter names conj name)))
    

The 'alter' function applies 'conj' to the in-transaction value of 'names',
and the result becomes the new value of the ref.

~~~
bjoli
Why should local bindings use let? In scheme internal (define ...) is defined
in terms of letrec, which can add some overhead when letrec is unoptimized,
but for schemes with optimized letrec (guile 3.0, chez, racket...) (define
...) is just as fast as let or let* unless you start doing cyclic references.

The only real reason to not use Def would be if the bound symbol escapes the
scope, but that would be ridiculous for a modern language.

~~~
nickik
In Clojure 'def' creates a 'var' and those are global in the namespace.

> The only real reason to not use Def would be if the bound symbol escapes the
> scope, but that would be ridiculous for a modern language.

It's not 'ridiculous' to have a consistent definition of what something does.
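
A quick illustration of the difference (names hypothetical):

```clojure
;; 'def' always creates/updates a var global to the namespace,
;; even when it appears inside a defn:
(defn bad-add [x]
  (def tmp (* 2 x))  ; leaks user/tmp into the namespace!
  (+ tmp 1))

(bad-add 5) ; => 11
tmp         ; => 10, visible outside the function

;; 'let' creates a genuine local binding instead:
(defn good-add [x]
  (let [tmp (* 2 x)]
    (+ tmp 1)))

(good-add 5) ; => 11
```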

------
fiddlerwoaroof
This post is pretty old and many of the things in the "disappointed" section
have been improved quite a bit.

~~~
vimota
Curious about this! Could you elaborate on which?

~~~
Plugawy
Almost nobody uses refs in production code ;-)

~~~
AlexCoventry
That was the case even before the blog post was written.

------
kotutku
> I have to admit, that at the beginning using brackets was not easy for me.
> Once I’ve realized that the brackets are just on the other side of the
> function name, everything was simple and I could code very fast.

A question to fellow Lispers: is there a good reason why LISPs do not put
parentheses the way we know from math? Why `(f foo bar)` and not `f(foo
bar)`?

~~~
pdonis
_> is there a good reason why LISPs do not put parentheses the way we know
from math? Why `(f foo bar)` and not `f(foo bar)`?_

Because the first is a list and the second isn't. The whole point of LISP is
to represent code as data, and the fundamental data structure it uses is a
list.

~~~
MadWombat
well... (1 2 3) is syntactic sugar for (cons 1 (cons 2 (cons 3 nil))). What
prevents someone from adding syntactic sugar f (x y) for (f x y)?

~~~
dunefox
The fact that one of these is a list of symbols and the other one isn't. They
mean different things.

~~~
MadWombat
> "They mean different things"

For fuck's sake, these things don't mean anything by themselves. The only
reason they mean different things is because the syntax is defined this way.
So back to the original question, why is the syntax defined this way?

~~~
gsk22
Your proposed syntax becomes ambiguous when you are more than one level deep.
What would be the meaning of an expression like f (g (x y)) ?

Is it (f (g x y)) or (f g (x y))?

~~~
MadWombat
Hmm... I see. (x y) is a list, g (x y) is a function call, f (...) is another
function call, so f (g (x y)) is (f (g x y)) in regular LISP syntax. The
syntax is parsed from innermost to outermost, so where is the ambiguity?

~~~
filoeleven
(x y) is also a function call, because lisp always interprets the first
position of a list as such unless you quote it:

    
    
      f (g '(x y))
    

But that changes the meaning of g from “a fn with two args” to “a fn with one
arg that is a list.”

If your meaning is instead that function-outside notation cannot be mixed with
s-exps, i.e. you have to choose one mode or the other, then I think you're
right that there's no ambiguity here (and no need to quote the args above). I
suspect you would lose the power of macros, and may not be able to use
existing ones, but I don't write them so I'm not sure.

Below are some Clojure expressions with standard and then function-outside
versions. I quickly found that I wanted to go back to attached() parens() for
those, because that seems more readable to my eyes than having () them ()
spaced out. I’d never choose this option though, it took all of an afternoon
to get used to lisp syntax, and I find it to be more readable now than c-style
languages.

    
    
      (let [a 1 b 2] (+ a b))
      let ([a 1 b 2] + (a b))
      let( [a 1 b 2] +(a b))
    
      (apply + (range 10))
      apply(+ range(10))
    
      (defn my-fn [x] (* x x))
      defn (my-fn [x] *(x x))

~~~
MadWombat
Eh... you are still stuck with LISP. We are talking about an alternate syntax
where (x y) is not a function call.

------
Rapzid
Been a while since I touched Clojure, but at the time nothing beat Light
Table's insta-REPL. Has anything replaced that, or does any insta-REPL come
close to replicating that experience for any language?

~~~
ledgerdev
The integrated REPL and inline results in Light Table were the best! Sad that
it was abandoned.

The best choice today for a similar experience is
Chlorine ([https://atom.io/packages/chlorine](https://atom.io/packages/chlorine)),
which is built on top of Atom. It also uses the built-in Clojure socket REPL,
not the complex nREPL.

Calva does its best to show some data inline, but unfortunately VS Code
doesn't support displaying inline results:
[https://github.com/microsoft/vscode/issues/3220](https://github.com/microsoft/vscode/issues/3220)

~~~
BoiledCabbage
> Calva does its best to show some data inline, but unfortunately VS Code
> doesn't support displaying inline results

I've got no attachment to the Calva project whatsoever, but the below link
pretty clearly contradicts what you're claiming.

[https://calva.readthedocs.io/en/latest/eval-
tips.html#evalua...](https://calva.readthedocs.io/en/latest/eval-
tips.html#evaluation-in-a-file-editor)

Not to mention I've done it.

~~~
ledgerdev
Sorry, that wasn't worded well, and in no way am I claiming Calva doesn't show
inline information. I have used Calva extensively, though not in the past
year. It should have read something like this:

> does its best to show some inline information... but doesn't support
> displaying inline results nearly as well as Light Table or Chlorine.

So VS Code/Calva just adds text-only formatted information at the end of a
line. This works fine if the result is short, like say "hello world". But what
happens if it's a large result, say a list of 100 items? In that case it just
prints out some portion of the result, or perhaps even the whole result? I
can't even remember exactly. And if you throw an exception, it's printed out
on one very, very long line of text (or was last time I used it) at the end of
the line. I can recall scrolling horizontally through thousands and thousands
of columns, trying to interpret the already hard-to-decipher Clojure errors.

What Chlorine and Light Table do/did much better is show large/long results
in a formatted block below the line and, for large data structures, allow
formatted expansion, because Atom isn't limited to line-based decorations
only.

Here, even the author of Calva is asking for more than just simple :before and
:after line decorators in VS Code:
[https://github.com/microsoft/vscode/issues/3220#issuecomment...](https://github.com/microsoft/vscode/issues/3220#issuecomment-430124153)

Has Calva become able to handle inline errors and large results better? If so,
I need to try it out again.

~~~
Rapzid
It's an area that isn't "solved", IMHO. I love F# as well, but the typical fsi
select-to-execute workflow doesn't come close. Light Table provided the
ultimate scratch pad.

Scala worksheets are also meh, IMHO.

------
capableweb
> I have to admit, that at the beginning using brackets was not easy for me.
> Once I’ve realized that the brackets are just on the other side of the
> function name, everything was simple and I could code very fast.

Yeah, adopting automated insertion of brackets/parentheses is necessary,
otherwise you find yourself annoyed very quickly. And once you've adapted to
automatic insertion, it's hard to go back to a language where you don't have
it...

~~~
Swizec
This is what drove me away from lisp.

If everyone says you have to use a plugin to auto-insert parentheses, that
means computers know where they go. _So why are we writing them??_

~~~
lukifer
I've never spent any serious time with Clojure/Lisp, and the nested parens
have been a pretty major impediment. I'm curious if anyone's ever experimented
with dialects that replace brackets with significant whitespace
(newline+tabs), similar to Python? I could also see a drag-and-drop coding GUI
being pretty powerful.

I'm sure once one spends enough time with it, one starts to think in Lisp, and
the desire to solve the "problem" disappears. :)

~~~
tpfour
You never spent much time with it, and the parens are a major impediment? If
you cannot get past the "parens" (non-)issue, I suppose the language is not
for you. It really is not an issue, _at all_.

I don't say this to be demeaning! I would say the chances are far better that
you'll really enjoy the language than not. Give it a try, stop worrying about
the parentheses, and start thinking in terms of an AST.

    
    
        (let ((x 0)) (+ x 1))
    

If you want Python that's fine but there is definitely no problem with parens
in Lisp/Scheme :)

~~~
lukifer
Ha, just thinking aloud! More of my own psychological barrier than a knock on
Lisp. I do like it quite a lot, it's just completely alien compared to a life
of experience with C-like languages.

~~~
filoeleven
I found its alien nature to be kind of helpful in learning Clojure, actually.
The paren position change really does melt away quickly, and in the meantime
it was a good reminder that I was “not allowed” to do procedural code or
mutate variables; I had to use a different mindset. Most people who check out
Clojure have been coding for 10+ years, so you’re in good company as far as
ingrained habits go.

As others have said, the best way to check it out is to get a REPL running in
a decent IDE and play with the language. There’s something quite intuitive in
its workflow: knowing that you want to do _something_ , though you’re not
quite sure what it is yet, and typing a single ( gives you all the power and
space you need to figure it out no matter how deep inside a code block you
are.

------
the-alchemist
I came from a strict C++ -> Java/Python world. My college class on programming
languages included a Lisp, I think, but that was so long ago and was so poorly
taught that I graduated without ever hearing the word "REPL".

So grokking, let alone writing, Clojure code was daunting but extremely
worthwhile. All good things come to those who wait, and sure enough, I see it
now. Massive gain in productivity. Like, probably more than going from C++ to
Java, or Java to Python (for me).

P.S. This article is definitely a little dated in parts, but most of the
warts still stand.

