A nice way to think about how they can be used (and the difference between parallel and sequential evaluation discussed towards the end of the article) is in handling promises:
The parallel or sequential composition of a list of promises returns a promise of a list.
The parallel applicative composition resolves all promises simultaneously.
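Here's a minimal Haskell sketch of that parallel case, using the async package's `Concurrently` wrapper, whose Applicative instance runs its actions concurrently (the function name `fetchAll` is just made up):

import Control.Concurrent.Async (Concurrently(..))

-- Turn a list of IO actions ("promises") into a single action returning a
-- list; Concurrently's Applicative instance starts them all simultaneously.
fetchAll :: [IO a] -> IO [a]
fetchAll = runConcurrently . traverse Concurrently

Replacing that pipeline with plain `sequence` gives the sequential version: each action runs only after the previous one finishes.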
So, perhaps they're design patterns, but I think calling them that puts them on too high a pedestal. True "functional" design patterns, I think, run much deeper. They include things like recursion schemes and the Types of Data, module design, purely functional data structures, handling laziness and strictness, etc.
As others have stated, your parallel vs. sequential composition model is murky because it conflates dependency order with real time. This is not how Monads and Applicatives work or differ at all!
But they do operate in sequence and in parallel; it's in "dependency order", though.
If you think of monads and applicatives as being composed of "layers" of action, where each action pairs some instruction (like "print this string") with a "next action slot" which can contain whatever we like, then we can see how monads and applicatives differ.
In applicatives, you have a list of these layers: the first layer's next-action-slot contains a function, and each remaining layer's next-action-slot contains one parameter of that function. Applicatives are "executed" by running all of the actions in the list in sequence (weird, right?) and then applying all of the arguments to the function.
In monads, you "stack" layers. By stack I mean we place a new layer in the "next action slot" of each layer, so that to find the next action you have to keep digging down and down through the layers. This means that each later layer can depend upon the results of the "action" performed at earlier layers, giving monads their "dependency" structure.
These analyses are known as the free monad and the free applicative. I really encourage everyone to study them if they're interested.
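For reference, here's a minimal sketch of the two structures described above (standard definitions, modulo naming):

{-# LANGUAGE GADTs #-}

-- Free monad: each layer's next-action slot holds the rest of the
-- computation, so later layers can depend on earlier results.
data Free f a = Pure a | Free (f (Free f a))

-- Free applicative: a list of actions terminated by a function; no
-- action's shape can depend on another action's result.
data FreeAp f a where
  Done :: a -> FreeAp f a
  Ap   :: FreeAp f (b -> a) -> f b -> FreeAp f a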
(+) <$> a <*> b
x <- a
y <- b
return (a + b)
First, you probably meant:
do x <- a
   y <- b
   return (x + y) -- not: a + b
a >>= \x -> b >>= \y -> return (x + y)
bind a (\x -> bind b (\y -> return (x + y)))
The `Applicative` interface statically guarantees there is no data dependency and might therefore have a smarter implementation. For `Monad` this simply isn't possible. It's a subtle, but very important, difference.
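You can read that straight off the standard signatures: with `<*>` both computations are fully determined before anything runs, whereas with `>>=` the second computation is built from the first one's result:

(<*>) :: Applicative f => f (a -> b) -> f a -> f b -- both effects known statically
(>>=) :: Monad m       => m a -> (a -> m b) -> m b -- second effect chosen at runtime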
For the most part, only "special" monads like IO enforce sequential computation.
It sends state updates "back in time" to compute fixed points. The above article shows how you can compute the Fibonacci stream using it.
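For the curious, a minimal sketch of that backwards state monad (the name `RState` is my own; the whole thing only works because of laziness):

newtype RState s a = RState { runRState :: s -> (a, s) }

instance Functor (RState s) where
  fmap f (RState m) = RState $ \s -> let (a, s') = m s in (f a, s')

instance Applicative (RState s) where
  pure a = RState $ \s -> (a, s)
  RState mf <*> RState mx = RState $ \s ->
    let (f, s'') = mf s' -- this step sees the state produced by the later one
        (x, s')  = mx s
    in (f x, s'')

instance Monad (RState s) where
  RState m >>= f = RState $ \s ->
    let (a, s'') = m s'                -- the state here arrives "from the future"
        (b, s')  = runRState (f a) s
    in (b, s'')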
No, IO doesn't either, because of laziness. If you need a demonstration, then, in the IO monad, open a file (using the functions in System.IO), read the contents (without doing anything else that depends on them), close the file, and then use the file contents.
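Concretely (a sketch; the filename is made up):

import System.IO

main :: IO ()
main = do
  h <- openFile "input.txt" ReadMode
  contents <- hGetContents h -- lazy: nothing has actually been read yet
  hClose h                   -- handle is closed before any read happens
  putStrLn contents          -- forcing the read now yields "", truncated at close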
IO still works in data dependency order.
The whole cottage industry of streaming monadic operators (enumeratees, pipes, conduits, io-streams, etc.) takes "fixing" Lazy IO as a major design goal or inspiration.
Lazy IO depends upon the function `unsafeInterleaveIO`, which, as its name suggests, is considered potentially unsafe, and your example demonstrates why!
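Roughly how that lazy reading is built (a simplified sketch of an `hGetContents`-style loop, not the actual library code):

import System.IO
import System.IO.Unsafe (unsafeInterleaveIO)

-- Defer the read until the resulting String is actually demanded; each
-- character read defers the rest of the stream the same way.
lazyRead :: Handle -> IO String
lazyRead h = unsafeInterleaveIO $ do
  eof <- hIsEOF h
  if eof
    then return []
    else do
      c  <- hGetChar h
      cs <- lazyRead h
      return (c : cs)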
x <- get
z <- get
Most monadic binds aren't sequential. Sequencing is only necessary for certain monads (like IO); most monads can be evaluated in any order.
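For instance, in a lazy monad like `Identity`, bind imposes no evaluation order at all; an earlier action whose result is never needed is never evaluated:

import Data.Functor.Identity

v :: Identity Int
v = do _x <- Identity (error "never forced") -- never evaluated
       y  <- Identity 2
       return y
-- runIdentity v == 2; the first "action" is skipped entirely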
(>>=) :: Monad m      -- for any monad,
      => m a          -- take a wrapped value
      -> (a -> m b)   -- and a function which needs it unwrapped
      -> m b          -- unwrap it, and apply that function
(>>=) :: Monad m             -- for any monad,
      => m a                 -- take a wrapped value
      -> ((a -> m b) -> m b) -- and return a function which takes a
                             -- function (from an unwrapped value to a
                             -- wrapped one) and returns another wrapped one
Edit: I think "dependant" is also a typo. Apparently in British English it is used to mean a dependent in the sense of a child who relies on a parent for support, but when it's used as an adjective, "dependent" is still the correct spelling, even in British English.
Honestly I think the differences in that example are pretty superficial. The real advantage of Applicative is that, because its assumptions are weaker, some things that aren't Monads are Applicatives.
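The classic example is `ZipList`, which famously has no Monad instance that agrees with its Applicative:

import Control.Applicative (ZipList(..))

pairs :: [(Int, Char)]
pairs = getZipList $ (,) <$> ZipList [1, 2, 3] <*> ZipList "abc"
-- [(1,'a'),(2,'b'),(3,'c')]: pointwise pairing, not the cartesian
-- product the list Monad's bind would give you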
For most instances the difference is superficial, but for some problem domains the difference can have a pretty big impact.
All of this is because Applicative composition (using apply) statically guarantees there are no data dependencies between the (possibly) effectful computations. Monadic composition (using bind) can (but might not) introduce a data dependency, which can only be reasoned about at runtime.
Facebook's Haxl project is an interesting example of this: http://www.cs.ox.ac.uk/ralf.hinze/WG2.8/31/slides/simon.pdf
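A toy illustration of the idea (this is not Haxl's real API; `Request` and `Response` are stand-ins): because apply exposes both sides before anything runs, an interpreter can collect every request and issue them as one batch:

type Request  = String -- hypothetical
type Response = String -- hypothetical

-- A computation that declares all of its requests up front and builds
-- its result from the matching responses.
data Fetch a = Fetch [Request] ([Response] -> a)

instance Functor Fetch where
  fmap f (Fetch rs k) = Fetch rs (f . k)

instance Applicative Fetch where
  pure a = Fetch [] (const a)
  Fetch rs f <*> Fetch rs' x =
    Fetch (rs ++ rs') $ \resps ->
      let (as, bs) = splitAt (length rs) resps
      in f as (x bs)

-- The runner sees the whole batch before performing any I/O:
runFetch :: ([Request] -> IO [Response]) -> Fetch a -> IO a
runFetch batch (Fetch rs k) = k <$> batch rs

Note that no lawful `Monad` instance could do this, since the second set of requests could depend on the first set of responses.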
As far as I'm aware, nearly every functor admits at least one applicative structure and oftentimes many choices thereof.