Honestly, one of the things that bothers me most is how hand-wavey Clojure enthusiasts get when people bring up performance. They'll often "prove" its performance characteristics on paper instead of providing solid real-world examples, and they tend to dismiss other people's benchmarks by blaming the quality of the Clojure code being benchmarked.
Whoa. I didn't dismiss any benchmarks, and I'm happy to back up any of what I've said with real-life experiences.
For example, a few weeks back I was working on a system that involved loading CSV files several hundred gigabytes in size. I wrote it in a functional style, with several map and filter operations over the rows, and I was able to program the logic by viewing the entire CSV file as a single lazy sequence to transform.
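To make that concrete, here's a minimal sketch of the shape of that code (the file path, delimiter, and field predicate are made up for illustration; this isn't the production code):

    (require '[clojure.java.io :as io]
             '[clojure.string :as str])

    (defn count-active-rows [path]
      ;; line-seq exposes the whole file as one lazy sequence of lines,
      ;; so map/filter compose over it without loading it into memory.
      (with-open [rdr (io/reader path)]
        (->> (line-seq rdr)
             (map #(str/split % #","))          ; parse each row into fields
             (filter #(= "active" (second %)))  ; keep rows whose 2nd field matches
             count)))                           ; realize inside with-open

The point is that the whole multi-hundred-GB file flows through as one lazy sequence; only the row currently being processed needs to be in memory.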
After profiling, I ended up using Java arrays instead of Clojure vectors as the primary data format for records in the system: given that the app was IO-bound, this alone yielded about a 20% improvement for that particular use case.
However, because Clojure is capable of treating arrays as sequences polymorphically, the change required very few modifications to my code and I was able to maintain a functional style.
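That polymorphism is easy to see in a REPL; the record shape below is invented purely for illustration:

    ;; The same seq-based code works whether each record is a Clojure
    ;; vector or a Java Object[] array, because nth/seq handle both.
    (defn total-amount [rows]
      (reduce + (map #(nth % 2) rows)))

    (total-amount [[1 "a" 10.0] [2 "b" 5.0]])        ;=> 15.0 (vectors)
    (total-amount [(object-array [1 "a" 10.0])
                   (object-array [2 "b" 5.0])])      ;=> 15.0 (arrays)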
Also, here's a microbenchmark I just threw together in a REPL, since you asked. The operation: create a map data structure, then assoc a key/value pair into it ten million times. The key is the .toString() of a random integer between 1 and 100,000; the value is the integer itself. The resulting map therefore ends up with 100k entries (with 10M draws, essentially every key gets hit) after being modified 10M times.
Except for the map implementation used, the code is identical.
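For reference, the Clojure side looks roughly like this (my reconstruction of what was described, not the exact REPL code); the java.util.HashMap variant just swaps assoc for .put:

    (defn bench-persistent-map []
      ;; assoc 10M random keys into an immutable Clojure map
      (loop [m {} i 0]
        (if (< i 10000000)
          (recur (assoc m (str (inc (rand-int 100000))) i) (inc i))
          (count m))))    ;=> ~100000

    (defn bench-hash-map []
      ;; same workload against a mutable java.util.HashMap
      (let [m (java.util.HashMap.)]
        (dotimes [i 10000000]
          (.put m (str (inc (rand-int 100000))) i))
        (.size m)))

    (time (bench-persistent-map))
    (time (bench-hash-map))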
I ran each test six times, discarded the first three runs (to let the JVM warm up), and took the average time of the remaining three.
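A harness for that protocol might look like this (hypothetical; the original timing was done by hand in the REPL):

    (defn bench-avg
      "Run f six times; discard the first three runs (JVM warm-up)
      and return the mean wall-clock time of the last three, in ms."
      [f]
      (let [times (vec (for [_ (range 6)]
                         (let [t0 (System/nanoTime)]
                           (f)
                           (/ (- (System/nanoTime) t0) 1e6))))]
        (/ (reduce + (subvec times 3)) 3.0)))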
I know that, in the Java world, you guys probably don't really understand this fact, but 3x slower is not only bad, it's so bad as to be completely unworkable in the real world.