Q. If you append to a linked list, you can't avoid creating an entirely new copy of the list, can you? (prepending is OK, because you reuse the old list).
Or can Haskell's laziness help with this? Can it somehow avoid actually creating the new copy, but instead just return the correct value when you reach the end of the list? So that the new copy is never actually made, but it just behaves as if it was?
There's a trick for efficient appending that's somewhat analogous to laziness. It's quite clever. Sadly, I cannot claim to have come up with it, and I don't know who did.
Instead of working with [a] lists, work with [a] -> [a] "prepender" functions. The analogue of [1, 2, 3] is f = \x -> [1, 2, 3] ++ x. The analogue of appending 4 to f is then f . (\x -> [4] ++ x), where . is function composition. Appending with these analogues is, I believe, O(1) instead of O(n).
To "evaluate" an element of this "closure space", just apply it to the empty list.
You can also use some form of balanced binary tree, where all the interesting operations take O(log n) time and adding an element requires only about log(n) (mostly internal) nodes to be rebuilt; the rest of the old tree is shared. That is worse than the O(1) optimum you can reach with linked lists, but it's easy to come up with and often good enough.
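As a rough sketch of the sharing involved (using a plain, unbalanced binary search tree for brevity; a balanced variant such as an AVL or red-black tree is what actually guarantees the log(n) bound), an insertion only rebuilds the nodes along its search path:

```haskell
-- Sketch of a persistent binary search tree. Inserting copies only the
-- nodes on the search path (about log n nodes when balanced); all other
-- subtrees of the old tree are shared with the new one.
data Tree a = Leaf | Node (Tree a) a (Tree a)

insert :: Ord a => a -> Tree a -> Tree a
insert x Leaf = Node Leaf x Leaf
insert x t@(Node l y r)
  | x < y     = Node (insert x l) y r   -- rebuild left spine, share r
  | x > y     = Node l y (insert x r)   -- rebuild right spine, share l
  | otherwise = t                       -- already present, share everything
```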
Hey, you can even simulate Haskell's laziness to implement some persistent data structures efficiently in Java.