
Efficient Amortised and Real-Time Queues in Haskell - jkarni
http://www.well-typed.com/blog/2016/01/efficient-queues/
======
wyager
In my experience, banker's queues (like the one presented in the article) are
_not_ the best persistent queue solution.

The Finger Tree (available in the Haskell standard library under
Data.Sequence) is a really impressive data structure. It's persistent, it has
O(log(min(n,len-n))) access and modification (which means O(1) cons, snoc,
head, and last), and O(log(min(m,n))) concatenation.

In my testing, it was several times faster than the fastest banker's queue
library I found.

~~~
thesz
I once changed code from a priority queue built on a finger tree to sorted
lists (regular lazy Haskell lists, sorted, not even merged!) and got an order
of magnitude speedup and a reduction in memory consumption.
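
Roughly something like this (a minimal sketch of what I mean, with my own
names; Data.List.insert keeps a sorted list sorted):

    import Data.List (insert)

    -- a priority queue as a plain sorted list, smallest element first
    type PQueue a = [a]

    push :: Ord a => a -> PQueue a -> PQueue a
    push = insert                  -- walks the list, but lazily and cheaply

    popMin :: PQueue a -> Maybe (a, PQueue a)
    popMin []       = Nothing
    popMin (x : xs) = Just (x, xs) -- O(1)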

So I would advise staying away from finger trees and other interesting data
structures when lists suffice.

In the case of queues, lists are more than sufficient.

What the article lacks is performance comparisons.

------
sdegutis
I probably understand these concepts without knowing their official names. But
man, sometimes I feel really dumb when I see phrases like "Queue Invariant" or
"Amortised Complexity". It's nice that this article explains them simply.

~~~
ashark
> I probably understand these concepts without knowing their official names.

It's been my observation that reading articles/posts written by Haskell
programmers is a great way to spend a fair amount of time being confused
over things that you already understand, usually for this reason. In that
sense, at the least, they're usually educational.

It's my _suspicion_ that the most common path to becoming a Haskell programmer
is to read too many of these sorts of things, to start using the terminology
in your own writing and speech, then to find that no-one but Haskell
programmers understand you anymore, leaving you no choice.

~~~
danidiaz
"Amortised complexity", a concept so esoteric that is used in the
documentation of basic Java data structures:
[https://docs.oracle.com/javase/8/docs/api/java/util/ArrayLis...](https://docs.oracle.com/javase/8/docs/api/java/util/ArrayList.html)

"The add operation runs in amortized constant time, that is, adding n elements
requires O(n) time."
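
To unpack that a bit (my own summary of the standard geometric-growth
argument, not a quote from the docs): the backing array is only copied when
it fills up and is then grown by a constant factor, so with doubling, for
instance, the copying work over n adds totals roughly

    n + n/2 + n/4 + ... <= 2n

which is O(n) overall, i.e. amortised O(1) per add.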

------
bsaul
Never thought laziness could transform an O(1) operation into an O(n) one as
a side effect.

I thought memory management was hard to reason about in pure functional
languages, but I never thought it could affect algorithmic complexity this
badly.

~~~
jkarni
If you're talking about the discussion of head and snoc at the beginning of
the article, laziness just moved the costs around - head becomes O(n) (from
O(1)), but snoc becomes O(1) (from O(n)), because you only pay for the snoc
costs when inspecting the queue with 'head'.
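
A rough sketch of that effect (my own illustration of the naive single-list
queue, not code from the article):

    -- a queue as a single list, with snoc appending at the back
    type Queue a = [a]

    snoc :: Queue a -> a -> Queue a
    snoc q x = q ++ [x]   -- lazily this is O(1): it only builds a (++) thunk

    -- after n snocs the queue is ((([] ++ [x1]) ++ [x2]) ++ ...) ++ [xn];
    -- 'head' has to force its way through all n layers of (++) to reach x1,
    -- so the cost shows up here as O(n) rather than at each snoc
    front :: Queue a -> a
    front = head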

