Haskell's do notation is syntactic sugar over monads that effectively allows you to write 'imperative-looking' code while still carrying a local state forward without mutation. The Wikibook does a pretty good job of explaining what this looks like (though I'm guessing you already know this).
Now, obviously one of do notation's advantages is the same as for any other monad usage: it lets us explicitly sequence events in a lazy language that otherwise offers no (obviously intuitive) guarantees on evaluation order. In that sense it's nothing more than sugar over the otherwise necessary chains of ugly >> and >>= operators and their increasingly annoying indentation.
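To make the "just sugar" point concrete, here's a small sketch in the Maybe monad (safeDiv and calc are made-up names for illustration): the do block and the hand-desugared chain of >>= are the same computation.

```haskell
-- Division that fails gracefully instead of throwing on zero.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- Two dependent divisions, written with do notation...
calc :: Int -> Int -> Int -> Maybe Int
calc a b c = do
  q <- safeDiv a b
  r <- safeDiv q c
  return (r + 1)

-- ...and the same computation desugared into explicit >>= chains.
calcDesugared :: Int -> Int -> Int -> Maybe Int
calcDesugared a b c =
  safeDiv a b >>= \q ->
  safeDiv q c >>= \r ->
  return (r + 1)
```

Both versions short-circuit to Nothing on a division by zero, and the sequencing of the two divisions is explicit in each.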
But the other thing it offers is syntactic sugar for carrying state forward into successive computations (as with the State monad), which still carries at least some useful sweetness in a language that is otherwise functionally pure, and which is why F# generalized the concept even further into computation expressions.
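Here's a minimal sketch of that state-carrying, using Control.Monad.State from the mtl package (the push/pop stack is just an illustrative example): each line in the do block reads or replaces the threaded state, with no explicit state-passing in sight.

```haskell
import Control.Monad.State

-- Push a value onto a stack of Ints held as the threaded state.
push :: Int -> State [Int] ()
push x = modify (x:)

-- Pop the top value, if any, replacing the state with the rest.
pop :: State [Int] (Maybe Int)
pop = do
  s <- get
  case s of
    []     -> return Nothing
    (x:xs) -> put xs >> return (Just x)

-- A sequence of operations; the stack is carried forward implicitly.
stackOps :: State [Int] (Maybe Int)
stackOps = do
  push 1
  push 2
  pop
```

Running runState stackOps [] yields (Just 2, [1]): the pop sees the stack as it was after both pushes, yet no value was ever mutated.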
Looked at another way, do notation, or something like it, can be used to sugar over something that rather more looks like the Clojure ->/->> operators, where the initial value is essentially a local namespace. Much like the threading macros, the result even appears to be doing a kind of mutation, even though it's actually doing nothing of the sort.
This kind of thing turns out to be useful for games, for instance, as the linked State monad example above does. In games we often have a main update loop, where we have to do several successive operations on our game that might change the state. We can do this a number of ways, but one way is with something like do notation, where for instance (in some hypothetical language) we might do this:
do with gameState
    oldGame   <- gameState
    gameState <- checkInput
    gameState <- tick
    if gameState != oldGame then render gameState
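In Haskell itself, that hypothetical loop might be sketched with the State monad roughly like this (the Game type, its fields, and the bodies of checkInput and tick are all made-up placeholders, not a real engine):

```haskell
import Control.Monad.State

-- Hypothetical game state: a frame counter and a pause flag.
data Game = Game { frame :: Int, paused :: Bool } deriving (Eq, Show)

-- Placeholder update steps; a real game would read events, move entities, etc.
checkInput :: State Game ()
checkInput = modify (\g -> g { paused = False })

tick :: State Game ()
tick = modify (\g -> g { frame = frame g + 1 })

-- One pass of the update loop: run each step against the threaded state,
-- then report whether anything actually changed.
updateLoop :: State Game Bool
updateLoop = do
  oldGame <- get
  checkInput
  tick
  newGame <- get
  return (newGame /= oldGame)
```

Each step appears to "mutate" the game, but runState just threads a fresh Game value through the sequence.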
You can write whole imperative, mutation-riddled languages inside purely functional ones this way. There's an implementation of BASIC that runs entirely inside Haskell's do notation.