
Sticking to traditional computer science (i.e., ignoring constants, caching, and other details, and confining ourselves to Big-O analysis), it is easy to show, or indeed simply to see, that any mutable algorithm can be represented in a purely functional way by creating a balanced tree that represents memory and can be written to and read from by index. All reading and writing operations that treat the tree as simply an expanse of RAM are O(log n), vs. O(1) for direct RAM access. Therefore, pure functional programs can simulate any impure algorithm with a slowdown no worse than a log n factor.
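
Here's a minimal sketch of that tree-as-memory idea (my own illustration, in Scala, using the standard immutable TreeMap as the balanced tree; none of these names come from tel's post):

    import scala.collection.immutable.TreeMap

    // "Memory" is just a balanced tree from address to value.
    // Both read and write are O(log n), and no cell is ever overwritten
    // in place: write returns a new Memory and leaves the old one intact.
    final case class Memory(cells: TreeMap[Int, Int]) {
      def read(addr: Int): Int = cells.getOrElse(addr, 0)
      def write(addr: Int, value: Int): Memory = Memory(cells.updated(addr, value))
    }

    object Memory {
      val empty: Memory = Memory(TreeMap.empty)
    }

    // val m1 = Memory.empty.write(0, 42)
    // m1.read(0)  // 42, and Memory.empty is untouched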

Again, let me emphasize this is a worst-case analysis. Many common algorithms have Big-O-equivalent pure versions regardless, and that's part of the reason tel specifically singles out Union/Find: it's actually somewhat rare to encounter an algorithm where there isn't a Big-O equivalent that is pure. Pure functional algorithms often require some modestly clever amortization to get there, but that's perfectly valid both in theory and in practice (many beloved "imperative" structures have amortized complexities, too, including such basics as a hash table).
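
To make "modestly clever amortization" concrete, here's the classic example (my own sketch, in Scala, not anything from tel's post): the two-list purely functional FIFO queue, where enqueue is O(1) and dequeue is amortized O(1) even though an individual dequeue can cost O(n) when the front list runs dry.

    // Purely functional queue with amortized O(1) enqueue/dequeue.
    final case class Fifo[A](front: List[A], back: List[A]) {
      // O(1): push onto the back list.
      def enqueue(a: A): Fifo[A] = Fifo(front, a :: back)

      // Amortized O(1): usually pops the front list; occasionally pays O(n)
      // to reverse the back list, but each element is reversed at most once.
      def dequeue: Option[(A, Fifo[A])] = front match {
        case x :: rest => Some((x, Fifo(rest, back)))
        case Nil => back.reverse match {
          case x :: rest => Some((x, Fifo(rest, Nil)))
          case Nil       => None
        }
      }
    }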

In practice, pure functional structures can come with larger constants; whether those constants hit you depends on your use case and, often, on your sensitivity to garbage collection pauses.

In light of all that, I think it would be fair to say that your question is somewhat ill-defined. You can really only compare particular algorithms against each other, because there's no trivial equivalence between "imperative" and "pure functional" algorithms. Plus, the barrier between the two in practice is quite porous... especially in a garbage-collected imperative language, nothing stops you from using a "functional" algorithm, and every practical "functional" language will give you a way of running "imperative" algorithms directly. (Yes, even Haskell. But tel is building up to that. I won't give it away yet. Stay tuned.)




My question may have been ill defined, but your answer was awesome. :)

I have to confess your "easy to show/see" isn't immediately obvious to me yet. That's as likely because I haven't tried very hard to see it as anything else. This post is squeezed in between other things I'm trying to get done (poorly, already), and I fear digging into this wouldn't help.

I am interested, though, so more pointers and explanations would be greatly appreciated.


Regarding the "easy to see":

1) Take your algorithm that involves updates to memory.

2) Split it up so that "memory" is represented by an ADT - in the original imperative setting, it's logically a hash table - O(1) read and write.

3) Replace that ADT with a binary tree. Now you have O(lg(n)) read and write.

4) If the previous algorithm was O(f(n)), it can't do any particular thing more than O(f(n)) times, including memory access. So in the worst case, you've made O(f(n)) things take O(lg(m)) times as long (where m is the size of the memory), so the new algorithm must be in O(f(n) * lg(m)). A rough code sketch of steps 2 and 3 follows.
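
Here's that sketch (my own illustration, in Scala; the trait and class names are hypothetical). The algorithm only ever talks to the memory ADT, so swapping the O(1) mutable backing store for an O(log n) persistent tree doesn't change the algorithm itself, only the per-access cost.

    // Step 2: the memory ADT the algorithm is written against.
    trait Mem {
      def read(i: Int): Int
      def write(i: Int, v: Int): Mem   // returns the memory to use from now on
    }

    // Original backing store: O(1) reads and writes via in-place mutation.
    final class ArrayMem(cells: Array[Int]) extends Mem {
      def read(i: Int): Int = cells(i)
      def write(i: Int, v: Int): Mem = { cells(i) = v; this }
    }

    // Step 3's replacement: O(log n) persistent tree, no mutation anywhere.
    final class TreeMem(cells: scala.collection.immutable.TreeMap[Int, Int]) extends Mem {
      def read(i: Int): Int = cells.getOrElse(i, 0)
      def write(i: Int, v: Int): Mem = new TreeMem(cells.updated(i, v))
    }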


Ah, I think I see what you mean. Seems the constants being ignored could be massive. Are there any comparisons showing how things stack up in practice? (Similar to how heapsort is typically not used, even though it has among the better Big-O values, right?)


Right, this particular approach is most interesting as an easy upper bound.

Constants are always highly situation dependent. If you are replacing a single memory lookup with a tree traversal, that's going to be a huge difference. If, for some reason, access to your mutable variables is already an expensive operation, it might not make much difference at all. If you need to take periodic snapshots of your world state, the mutation-free version might come out way ahead sharing portions of the tree that don't need to be copied.


I'm not sure how the periodic snapshot would "come out ahead" with the mutation-free version. Seems the best you could claim is that it wouldn't be as far behind as one might think. Unless "periodic" means every change, in which case I would expect they could be equal. (That is, the extra work required to make it "mutation free" is extra work. Unless all of that extra work is required anyway, it is hard to see how that version would "come out ahead.")

And this is why I specifically asked about the DLX algorithm. It is purpose-built for rapid backtracking. A brief read of [1] shows that it was even made parallel to speed it up ("made parallel" being a gross simplification, of course). It is a very interesting read on methods for parallelizing a heavily mutation-based algorithm.

[1] http://did.mat.uni-bayreuth.de/wassermann/allsolutions.ps.gz


DLX might be a fun algorithm to explore too. Union-Find is one I was more familiar with, but I think you're correct that DLX cannot have a reasonable, direct "pure" translation.


'I'm not sure how the periodic snapshot would "come out ahead" with the mutation free version.'

A snapshot of freely mutated memory is an O(N) copy.

A snapshot of an immutable tree is a single O(1) pointer copy (you just need to save the root).

Doing a full copy every change would be tremendously costly (substantially more than the penalty for walking the tree on that change, and probably overwhelming the overhead of walking the tree on reads).

Doing a full copy every hojillion steps would of course amortize to cheap (and probably the overhead from walking a tree for reads and writes would overwhelm it).

Anything real will of course fall somewhere between. As I said, constants are tremendously context dependent.

Note that this (of course) doesn't speed up the mutation-free version - but if you have the constraint of wanting regular (or otherwise cheap) snapshots, then using the mutation-free version can be the cheapest way of doing that.
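
A minimal sketch of that cost difference (my own, in Scala, reusing the tree-as-memory representation from upthread):

    import scala.collection.immutable.TreeMap

    // Snapshotting freely mutated memory: you must copy all N cells,
    // because later writes would otherwise change the snapshot. O(N).
    def snapshotMutable(cells: Array[Int]): Array[Int] = cells.clone()

    // Snapshotting the immutable tree: just keep the current root.
    // Later "writes" build new trees; this one can never change. O(1).
    def snapshotImmutable(cells: TreeMap[Int, Int]): TreeMap[Int, Int] = cells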

I don't know the details of the DLX algorithm, or adaptations to it, well enough to say much about it in particular off the top of my head. I'd love to dig into it at some point, but I've unfortunately got higher priorities presently.


This is assuming a very naive snapshot of a mutated memory block. I would assume that if you were doing something that needed snapshots of each instant of the program, you would come up with a much more sane algorithm for getting that done.

It would probably use many of the same tricks as the immutable structures. Which is why I would assume they would be equal. (That is, I realize that immutable structures don't do a full copy on every "change." Depending on the structure and the change, they don't even do a copy at all.)

Consider, we basically just described how git works, no?


Well, certainly nothing stops you from using the immutable version and calling it mutable (it just happens to do no mutation!). But my point is that it's a nontrivial modification compared to other approaches to enabling snapshotting, and it's a good tool to have in your belt for that kind of situation.

Particularly interesting, genuinely "persistent" data structures (as used by Okasaki in Purely Functional Data Structures - which is a fantastic read and tons of fun) can give amortized worst-case bounds even in the presence of a malicious consumer of the data structure picking which operations to run against which historical versions of the object.


I meant more of the same strategies. Data sharing and the like. In a mutable language this can be done easily enough by simply updating the head pointer. In a non-mutable language this is tougher in some respects. This is pretty much the thing that tripped up a ton of folks back in the early days of Java. "Why doesn't x.replaceAll('a', 'b') change the value of x?" was not an uncommon mistake to encounter.
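
For anyone who hasn't hit that one: Strings on the JVM are immutable, so replaceAll hands back a new value and you have to rebind the reference yourself. A tiny illustration (in Scala, which shares Java's String):

    val x = "banana"
    val y = x.replaceAll("a", "o")
    // x is still "banana": replaceAll returns a new String and never
    // mutates the receiver. To "change" x you'd rebind it (a var in Scala).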

DLX is actually a great example of this sort of thing, as the whole point is that the operation to remove an item from a doubly linked list can be reversed to add the item back just from looking at the item that was removed.
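
A minimal sketch of that trick (my own code, in Scala; the names are mine, and DLX applies this to every node of its sparse matrix of rows and columns):

    // Doubly linked list node. Removing it splices it out of the list, but
    // the node keeps its own prev/next pointers, so it alone carries enough
    // information to undo the removal during backtracking.
    final class Node(val value: Int) {
      var prev: Node = this
      var next: Node = this

      def remove(): Unit = {   // unlink from the list; this node's fields stay intact
        prev.next = next
        next.prev = prev
      }

      def restore(): Unit = {  // exact inverse, using only this node's own fields
        prev.next = this
        next.prev = this
      }
    }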

And again, consider the way that git works. Nobody would call C and the techniques it uses immutable, but the underlying data structure is.

More directly put, I am not calling for all data structures in a classical "mutation based" program to be mutable. I am curious about some of the more famous mutation-based algorithms and whether there are good comparisons with the analogous pure versions.

There was a great post here a few weeks ago about a blind spot functional programmers have in building trees. Having just seen "threaded trees" for what I think was my first time, I have to confess it took me longer to build one than I would have thought, mainly because I was trying to hold on to some of the "immutable" habits of programming.


"I meant more of the same strategies. Data sharing and the like."

Certainly it is possible to find alternative constructions that work. Occasionally these may still be faster. However, I strongly contest that it's "tougher" in a non-mutable context. Specifically:

'In a mutable language this can be done by simply updating the head pointer easily enough. In a non-mutable language this is tougher in some respects.'

This is wrong. The hard part about this is making sure old things pointed at by existing snapshots don't change. If your data is immutable, you get that for free.

Moreover, in terms of complexity of the system, (mutating algorithm + a bunch of stuff to capture the mutations) is likely to be messier than the nonmutating algorithm (which is sometimes cleaner than the mutating version to begin with, but certainly not always).

Also, note that you've moved to talking about "mutable languages", we had been talking about algorithms.

'This is pretty much the thing that tripped up a ton of folks back from the early days of java. "Why doesn't x.replaceAll('a', 'b') change the value of x?" was not an uncommon mistake to encounter.'

Which is clearly a problem with "non-mutable languages"? The problem there is that the Java paradigm had been strongly mutation oriented and then they dropped an incongruous mutation-free "method that is really more of a function" in there. Clarity and consistency are important in any setting.

"DLX is actually a great example of this sort of thing, as the whole point is that the operation to remove an item from a doubly linked list can be reversed to add the item back just from looking at the item that was removed."

But you can't do that if you might be sharing those lists with someone else. The point is that the constraints imposed by immutability are often the most effective means of addressing other constraints, and so study of these things is quite valuable. This thread has never been "Haskell is much better because it doesn't let you mutate anything!" - both because I've already acknowledged that in many settings the mutable versions of algorithms are preferable and because Haskell does let you mutate things (you just have to be explicit about what).

"And again, consider the way that git works. Nobody would call c and the techniques they use immutable, but the underlying data structure is."

As an aside, you can write C with very little mutation going on, if that's what you want to do. I've not looked at the git source, so I have no idea the degree to which they do.

As I said, though, that's an aside - my main point here is that immutable data, and algorithms working with it, are valuable and there are situations where they are the best solution even where mutation is "allowed" and even where a mutation-heavy version might be preferred in a slightly different setting.

"There was a great post here a few weeks ago about the blind spot in functional programmers in building trees."

A cursory search isn't turning this up - it sounds interesting. Do you have the link?


I think we are still talking past each other. So, first, the link you asked for: https://news.ycombinator.com/item?id=7928653 If I am misrepresenting it, apologies. And let me know. :)

I think where I'm talking past you is that I am perfectly fine with mutation-based algorithms using immutable data structures. That is, union-find can easily be done using a standard Scala immutable Vector for the array. The only caveat is that each "mutation" has to be of the form "x = x.updated(index, value)".
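
For concreteness, a minimal sketch of what I mean (my own code; no path compression or union by rank, just the bare idea):

    // Union-Find where the "array" is an immutable Vector.
    // The algorithm still reads as a sequence of updates; each update
    // just rebinds a local var to a new Vector instead of writing in place.
    def find(parent: Vector[Int], x: Int): Int =
      if (parent(x) == x) x else find(parent, parent(x))

    def union(parent: Vector[Int], a: Int, b: Int): Vector[Int] = {
      var p = parent
      val ra = find(p, a)
      val rb = find(p, b)
      if (ra != rb) p = p.updated(ra, rb)   // the functional spelling of "p(ra) = rb"
      p
    }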

So, the question I put to you is: do you consider that a mutation-based algorithm or not? I would, as the heart of the algorithm is still the updates to the array. You are just safe in knowing that anywhere you have let a reference to it leak out is never going to see a change. This is both good and bad, of course, depending on what you are doing.

Stated differently, I don't think we have been distinguishing between mutation-based algorithms that use immutable data structures and ones that use mutable data structures. (That is to say, I have not been concerning myself with that distinction.) So, if you consider it an immutable algorithm as soon as an immutable data structure enters the mix, then yes, most of what I've been saying is nonsense.

That seems unnecessarily restrictive to me, though, as changing things so that each update to the underlying structure also means updating a pointer is much less of a change than, for example, what is described in the story at the root of this discussion.

This is roughly what Git does. If you do a repack, the new pack is only used once it is done. Git rebuilds the entire pack, then updates the reference to the active pack. If you cancel the process at any point, the old pack is still good and still works. The process of building the structure is heavily "mutation" based, but once it is made, nothing is ever changed.

And you should look into the parallel DLX. It is very different from how algorithms are usually made parallel in most of the popular literature.


The post showing the balanced tree approach to modeling mutability is now up!

https://news.ycombinator.com/item?id=8027548


Awesome, thanks!


My next post shows the balanced tree approach. It's still in the works, but if you're eager then it wouldn't be difficult to find the draft in the same repo as the code.



