

Clojure 1.3 First Impression (It's Fast) - johnaspden
http://www.learningclojure.com/2010/09/clojure-13-first-impression.html

======
johnaspden
Last time I tried Scala, it blew Clojure out of the water for speed on the
program I translated. I assume that Java did too. I think that's probably not
true any more.

Anyway, pretty fractal tree program in lisp!

~~~
edifice
You're still going to find Scala to be significantly faster on arbitrary code.
It's great that there are ways to speed up key functions in Clojure now
though.

~~~
swannodette
Backed up by what evidence? And by arbitrary code do you mean code that
embraces mutability?

~~~
edifice
Scala effectively has these kinds of type declarations on every function. This
article itself proves the point. Take out the :static and type decls and see.

~~~
swannodette
Idiomatic Clojure tends to use Clojure's persistent data structures - you
don't often create custom types, and all of the core functions on those data
structures are already pretty much as fast as possible. No need for ^:static
or type decls.

The article also doesn't get into deftype/defrecord/protocols. In those cases
you always get the fastest path on the platform. Again, you can build things
that have the same perf as the core data structures w/o resorting to ^:static
or type decls.
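For instance (a minimal hypothetical sketch, names invented for illustration): protocol dispatch on a defrecord compiles down to a plain JVM interface call, so no hints are needed to get the fast path.

```clojure
;; Hypothetical example: protocol methods on records dispatch via a
;; generated JVM interface, with no reflection and no type hints.
(defprotocol Area
  (area [shape]))

(defrecord Circle [radius]
  Area
  (area [_] (* Math/PI radius radius)))

(defrecord Rect [w h]
  Area
  (area [_] (* w h)))

(area (->Rect 3 4))     ;=> 12
(area (->Circle 2.0))   ;=> 12.566...
```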

So what exactly do you mean again by arbitrary code? Perhaps you meant numeric
code - there ^:static and type decls help plenty. Perhaps you mean Java
interop? Again sure.

Personally I think Clojure is really taking the _dynamic, generic, and fast_
thing to a whole new level, w/o type-hinting of any kind.

EDIT: This isn't to say Clojure performance can't continue to improve.
Scala's ability to use Java arrays of primitives in higher order operations is
something I really, really want to see (and the above is a big chunk of the
work in that direction).

~~~
jules
It's strange that you mention Clojure's persistent data structures as an
advantage for performance. They have many advantages but performance isn't one
of them.

~~~
swannodette
I'm curious how you came to this conclusion. Programs in any language will
often be designed around immutability in order to improve code
maintainability. Without fast persistent data structures this will most
certainly decrease the performance of your otherwise elegant design due to
considerable amounts of copying.
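Concretely (a small sketch of the point being made): a persistent "update" returns a new structure that shares almost all of its internals with the old one, so there is no wholesale copying.

```clojure
;; Structural sharing: assoc on a persistent vector returns a new
;; vector in near-constant time; the original is untouched and the
;; two share most of their internal tree.
(def v  (vec (range 1000000)))
(def v2 (assoc v 0 :changed))

(nth v 0)   ;=> 0
(nth v2 0)  ;=> :changed
```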

~~~
lukev
If you're going for raw speed you should use mutable data structures. Sadly,
that's just the way it is - you can update a mutable data structure in a
couple of instructions by swapping out a pointer, while "updating" an
immutable/persistent object is inherently more work at the machine level.

Clojure is awesome because it provides immutable persistent structures that
are well onto the good side of "good enough", not because it's the fastest
thing possible. Java itself is far from the fastest thing possible, and the
speed of Java is itself a strict upper bound on Clojure's speed.

~~~
johnaspden
Being a sometime embedded man myself, I was nodding sagely at your comment
until this bit: "Java itself is far from the fastest thing possible, and the
speed of Java is itself a strict upper bound on Clojure's speed"

Clojure doesn't compile to Java; it compiles to JVM bytecode, and I think
(although I don't know) that that's supposed to be very fast.

In this post:

http://www.learningclojure.com/2010/09/clojure-faster-than-machine-code.html

I compiled a (lookup into a map (potentially constructed at run time)) into a
series of nested if statements, which then ran very fast indeed.
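The idea, roughly (a hypothetical sketch, not the actual code from that post): build the nested-if form as data from a map that may only be known at run time, then eval it into a fn that the JIT can optimize like ordinary branching code.

```clojure
;; Hypothetical sketch: compile a run-time map into a lookup fn
;; whose body is a chain of if tests.
(defn compile-lookup [m]
  (let [k (gensym "k")]
    (eval
     `(fn [~k]
        ~(reduce (fn [else [key val]]
                   `(if (= ~k '~key) '~val ~else))
                 nil
                 m)))))

(def lookup (compile-lookup {:red 1 :green 2 :blue 3}))
(lookup :green)  ;=> 2
(lookup :black)  ;=> nil
```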

I shouldn't think I've achieved anything like the holy grail here; it all goes
wrong at the end anyway, and I think that Clojure 1.3 might actually have
broken the optimization techniques I used.

But I think there's a case for saying that _an_ immutable lisp on the JVM
might one day be the fastest language in the world, because you can do those
sorts of tricks in lisp in a way that you can't feasibly do in assembler, and
the JVM does optimization dependent on run-time behaviour, as well as on
static analysis.

I don't have the data. I'm not asserting this.

But I am asserting that Java speed is not a strict upper bound on JVM language
speed (unless every JVM bytecode program is also a Java program, which I
can't imagine is the case!).

~~~
lukev
By the way, compiling a data structure to a lookup function is a very clever
trick, and fortunately something Clojure makes easy. It reminds me of how old-
school assembly programmers would blur the lines between code and data for
better performance. Only this is more structured and easier to manage.

There are a couple of drawbacks, though:

1. It really only works for read-only data. Generating a new lookup-fn is a
pretty expensive operation. I don't think I could use this for data that
changed at all.

2. Compiled code uses permgen space and isn't garbage collected as easily. If
you make extensive use of this technique, especially with large maps, it'd be
really easy to blow your permgen space.

------
whakojacko
As someone familiar with Java/Scala but not Clojure, what exactly do those
type hints do? Just prevent it from autoboxing the primitives? Avoid excessive
reflection? Can this be done for other types or just int/long/float/etc?

~~~
swannodette
Type-hinting Java objects to avoid reflection has been possible for some time
now. What wasn't possible was for fns to take or return primitives w/o boxing;
^:static lets you do that. ^:static also allows the JVM to apply its most
aggressive optimizations to your code - it tells the JVM the code won't
change - so _callers_ of ^:static fns won't see the latest version if you
redef them. The benefit is that with numeric code you'll often see an order
of magnitude performance jump.

So a general strategy is to write your code as you normally would - def'ing
and redef'ing fns at will. Then, when it works, declare the critical paths
^:static, adding primitive type hints like ^long and ^double if the code is
numeric.
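Something like the following (a sketch using the 1.3-alpha syntax described above; the exact metadata placement varied between alphas, and later releases dropped ^:static in favor of static-by-default):

```clojure
;; Sketch: ^:static plus primitive hints so the fn takes and
;; returns an unboxed long. (^:static is ignored on later Clojure
;; versions, where this behavior became the default.)
(defn ^:static sum-to
  ^long [^long n]
  (loop [i 1 acc 0]
    (if (> i n)
      acc
      (recur (inc i) (+ acc i)))))

(sum-to 10)  ;=> 55
```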

~~~
_delirium
CMUCL, facing the same everything-can-be-redefined problem in Lisp, took a
similar approach but dealt with it at the block-of-code level instead of
per-function. You could declare a block to be compiled together, and any calls
between functions in the block would be treated as having a promise that you'd
never redef them without recompiling the whole block, so they could be
statically optimized with respect to each other.

Not sure if that's a better or worse approach overall. It probably depends on
the structure of your program. One nice feature was that the same function
could be static from some perspectives and not from others: to functions in
the same compilation block it was static, but functions outside that block
that called it treated it as non-static, and would immediately get the new
version if, say, that whole block were recompiled. That let you make the
core code optimized by sticking it in one big compilation block, while not
changing the normal dynamic Lisp semantics of the block when viewed from the
perspective of any outside function.

~~~
swannodette
Clojure supports the "static from some perspectives and not from others"
pattern - you just have to go through the var. So even if an fn foo is declared
^:static, you can call #'foo instead, so that you always get the latest
version of the fn.
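A hypothetical sketch of the difference (invoking a var derefs it first, so it always sees the current binding):

```clojure
;; Direct calls to a ^:static fn may be linked at the call site;
;; going through the var always dereferences it at call time.
(defn ^:static greet [] :v1)

(defn direct  [] (greet))    ; may be baked in as the old version
(defn via-var [] (#'greet))  ; always the latest definition

(defn ^:static greet [] :v2) ; redef

;; (direct) may still return :v1 under static linking
(via-var)  ;=> :v2
```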

