Hacker News

> The underlying design of Clojure's data structures must be different. It needs to efficiently support functional updates; you don't want to fully copy a hash table or vector whenever you add a new entry.

I've actually been curious about this myself. No one seems to scream that functional languages are slow, so I'm curious what's going on under the hood. Like the author, I would presume they don't fully copy the hash table every time I add an entry.

Is Clojure smart enough to know that the previous hash map is now out of scope and can be GCed, so it just mutates the hash map in place under the hood? In the tiny amount of Clojure I've written, that seems possible. On the other hand, it makes performance much harder to reason about since mutating a variable that stays in scope after the mutation presumably would still trigger a copy. Do they just implement some kind of a "fall-through"? I.e. if I add a key to a hashmap, it creates a new empty hashmap, adds that value and a reference to the original hashmap. And then when I retrieve values, it searches the new empty hashmap, and then the "parent" hashmap if it isn't found?
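A toy version of that "fall-through" idea might look like the sketch below (this is purely hypothetical, to illustrate the question being asked; it is not how Clojure is actually implemented). Each "update" allocates only a tiny new layer that points back at the unchanged parent:

```python
class LayeredMap:
    """Toy 'fall-through' map: writes go into a fresh layer that
    references the unchanged parent; reads walk the parent chain."""

    def __init__(self, entries=None, parent=None):
        self._entries = entries or {}
        self._parent = parent

    def assoc(self, key, value):
        # "Adding" a key allocates only a small new layer,
        # leaving the original map untouched.
        return LayeredMap({key: value}, parent=self)

    def get(self, key, default=None):
        node = self
        while node is not None:
            if key in node._entries:
                return node._entries[key]
            node = node._parent
        return default

m1 = LayeredMap().assoc("a", 1)
m2 = m1.assoc("b", 2)
assert m2.get("a") == 1 and m2.get("b") == 2
assert m1.get("b") is None  # the old map is unaffected
```

The obvious flaw is that lookup degrades linearly with the number of updates, which is one reason real persistent maps use tree-shaped structures (tries) instead of a parent chain.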

I'm curious because a lot of functional programming seems like the kind of thing that would thrash memory in a GCed language. To an outsider, it seems like you can't do anything without allocating memory that you're often done using almost immediately. It would seem like "y = x + 1" would always take more time than "x = x + 1" because of the allocation.

Maybe GC is just a lot better than I think. Or maybe the functional style, with its less complicated scoping, makes GC trivial enough that it offsets the additional allocations. Does anyone have any idea, or maybe have a link handy? I'm not a language designer, nor a Java programmer, so I fear the source code may not be terribly useful to me. I'm also a terrible Clojure programmer, if Clojure is written in Clojure these days (although I'd like to get better one day, it seems like a really fun language).




Yep -- under the hood, "immutability" in Clojure is implemented with data structures that provide pretty good performance by sharing sections of their immutable structure with each other.

For example, if you have a vector of 100 items and you "mutate" it by adding an item (actually creating a new vector), the language doesn't copy all 100 items into a new 101-length vector. Instead, it takes advantage of the guarantee of immutability to "share structure" between the two vectors: the new vector allocates only a small amount of new data plus references into the old, unchanged structure. The same idea can be used to share structure in associative data structures like hash-maps.
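The simplest possible illustration of structural sharing is an immutable linked list (Clojure's actual vectors use a wide trie, but the sharing principle is the same). A minimal sketch in Python:

```python
class Cons:
    """One cell of an immutable singly linked list."""
    __slots__ = ("head", "tail")

    def __init__(self, head, tail=None):
        self.head = head
        self.tail = tail

def conj(lst, item):
    # O(1) "add": the new list reuses every existing cell
    # instead of copying them.
    return Cons(item, lst)

def to_list(lst):
    out = []
    while lst is not None:
        out.append(lst.head)
        lst = lst.tail
    return out

xs = conj(conj(conj(None, 1), 2), 3)   # front-to-back: 3, 2, 1
ys = conj(xs, 4)                       # front-to-back: 4, 3, 2, 1
assert to_list(xs) == [3, 2, 1]        # xs is unchanged
assert ys.tail is xs                   # structure is shared, not copied
```

Because nothing is ever mutated, it's always safe for `ys` to point directly at `xs`; that safety is exactly what immutability buys you.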

I'm no expert on this, so my explanation is pretty anemic and probably somewhat wrong. If you're curious, the book "Purely Functional Data Structures" [0] covers these concepts in concrete detail.

[0]: https://www.amazon.com/Purely-Functional-Data-Structures-Oka...


Here's how the persistent vectors work (old article series, but good; hopefully someone will tell us if it's out of date): https://hypirion.com/musings/understanding-persistent-vector...


> so I'm curious what's going on under the hood

https://en.wikipedia.org/wiki/Persistent_data_structure

> Maybe GC is just a lot better than I think.

Yes, Clojure relies heavily on the JVM's GC being very good. It thrashes the GC like there's no tomorrow, and it would be a lot of work, and extremely hard, for a from-scratch implementation of Clojure to match the performance of Clojure on the JVM, precisely because the JVM's GC is so good.

Having said that, I have moved 3 projects (10k-20k LoC) from Clojure to JS and don't plan on creating new ones in Clojure; the JS projects ended up being faster, shorter, and easier to understand. Idiomatic Clojure is very slow, and as soon as you want to squeeze a little performance out of it, your code base gets ugly really fast. Learning Clojure is nice for the insights, but I'll pick Node.js first any day for new projects. Even if I need the JVM, my first choice would probably be Kotlin, and then Clojure.

Of course there are many more downsides to using Clojure: no ecosystem, the cognitive overhead of doing interop with over-abstracted, over-engineered Java libraries (because of no ecosystem ;)), the horrible startup times, the cultist community. And the interop is really not that good; sometimes you have to write a Java wrapper over the Java lib to make it usable from Clojure. The benefits over JS are minimal, but the overhead and downsides are too great. Worth learning, but not worth using for real production projects.


Is there anything in particular that you had to give up to make the move (aside from the JVM)? I am assuming that for Node.js to be shorter and faster you avoided the bigger frameworks (Next, TypeORM, etc.) and stuck pretty close to basic middleware a la Express.


This is what bugs me about the Node.js cult: it's often compared with Rails, Django, and Laravel, which is just nonsense. Even Express isn't the same as plain Node. Plain Node.js is only comparable to something like Go's net/http, Ruby's Rack, or raw imperative PHP with no classes. Once you add the kind of scaffolding included in Rails and Django, Node.js slows down rapidly, with significant memory bloat. Try Redwood, the latest Rails-in-Node, and observe the memory footprint.


I’m intrigued by what kind of code you’re finding is shorter and faster in JS. I’ve had the opposite experience, even limiting Clojure to a single thread.



