Author of the blog post here. It's amusing to see this submitted by Don Stewart, since it was his explanation of algebraic data types that first made them "click" for me! I think I forgot to mention him in this post, but I remembered in the talk.
If anyone's interested in this and where else it might go, I recommend reading Don's SO answer, this SO question and its related answers, and these papers [3,4] on derivatives and dissections by Conor McBride. There are also several good blog posts and a paper by Brent Yorgey on combinatorial species, and of course Dan Piponi's blog is a treasure trove of interesting insights at the intersection of math, physics, and computer science.
Edit: It should go without saying, but because a few people last time around seemed to think I was trying to pass this off as my own ideas, let me state that there is nothing original in this blog post. It's a repackaging of the ideas and work of lots of other people, all of whom are way smarter than I am.
I don't mean to be negative; it's probably as much the material as your writing, but as I read the article, a voice keeps repeating "what is the point, where is this leading, why, why?" My impression is: here is verbose syntax that does describe types, but whose relation to the tasks of an ordinary programming language, to getting things done, is left unexplained for longer than my attention span holds, if it is explained at all.
And I'm fairly mathematically sophisticated (MA a while back with some effort to keep current).
It seems like the constructions "throw on the floor" everything that happens in the creation of a type, but I can't see what that accomplishes except making simple operations look like a giant mess.
I would add that the Stack Overflow answer mentions that the construction is a way to build an algebraic type with "polymorphic parameterization". I.e., Haskell uses the laws of algebra and some supplied primitives (AddL and AddR, which you mention but don't define!) to calculate A + B. Perhaps if you made that explicit, the article wouldn't have the "floating in the clouds and never coming down" quality it has for me now.
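For what it's worth, here is a minimal sketch of what those primitives might look like. The names `Add`, `AddL`, and `AddR` follow the comment above; in standard Haskell this is just `Either` with `Left`/`Right` renamed, and the point is that a value of `Add a b` is built by exactly one of two injections, so the count of values is |a| + |b|, hence "sum type":

```haskell
-- Sum types as "A + B": a renamed Either. AddL and AddR are the
-- two injections; a value holds either an `a` or a `b`, never both.
data Add a b = AddL a | AddR b
  deriving Show

-- Eliminating a sum: supply a handler for each case.
addElim :: (a -> c) -> (b -> c) -> Add a b -> c
addElim f _ (AddL x) = f x
addElim _ g (AddR y) = g y

main :: IO ()
main = do
  print (addElim show reverse (AddL (3 :: Int)))  -- prints "3"
  print (addElim show reverse (AddR "hi"))        -- prints "ih"
```

The algebraic laws the article leans on (commutativity, distributivity over products, and so on) then hold up to isomorphism on these types.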
The most obvious use for derivative types is the automated creation of "zipper" types for functional data structures. Among other uses, zipper types make certain purely functional data structures much more efficient. This is important both for Haskell and ML programmers, and also in situations where you need to leave older versions of a data structure intact, such as when implementing rollback or creating append-only file formats.
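To make the zipper connection concrete, here is a minimal list zipper (names and details are my own illustration, not from the post). The "one-hole context" of a list is a pair of lists, the elements before and after the cursor, which is exactly what differentiating the list type predicts, and it gives O(1) local edits without rebuilding the whole structure:

```haskell
-- A cursor into a list: (reversed prefix, suffix).
-- The pair of lists is the derivative ("one-hole context") of [a].
data Zipper a = Zipper [a] [a]
  deriving Show

fromZ :: [a] -> Zipper a
fromZ xs = Zipper [] xs

-- Move the cursor one step; Nothing if we fall off an end.
right, left :: Zipper a -> Maybe (Zipper a)
right (Zipper pre (x:post)) = Just (Zipper (x:pre) post)
right _                     = Nothing
left  (Zipper (x:pre) post) = Just (Zipper pre (x:post))
left  _                     = Nothing

-- O(1) insertion at the cursor.
insertZ :: a -> Zipper a -> Zipper a
insertZ x (Zipper pre post) = Zipper pre (x:post)

toList :: Zipper a -> [a]
toList (Zipper pre post) = reverse pre ++ post

main :: IO ()
main =
  case right (fromZ [1,2,3 :: Int]) of
    Just z  -> print (toList (insertZ 9 z))  -- prints [1,9,2,3]
    Nothing -> return ()
```

The same recipe mechanically produces zippers for trees and other algebraic types, which is where the derivative construction earns its keep.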
Aside from the practical interest in defining zippers noted in another response, isn't it enough that there is this interesting and (to many, anyway) unsuspected correspondence, one that holds fairly deep down? I mean---taking the Taylor series of a type! After all, we are told that "anything that gratifies one's intellectual curiosity" is suitable for an HN submission, no?
Well, the correspondence sounds interesting, but if it remains at the level of the unexplained, it is hard to see it being really interesting.
If you define a function, call it a "type", and then take the Taylor series of that function, how mind-blowing is that really?
My point is that without a motivation for these constructions, they are just constructions. It may be that everyone in-the-know already understands the motivation, knows why this thing labeled a type really has a "deep" relation to an abstract type-thing. Fair enough, but I'm just saying that if one omits this motivation, the whole exercise doesn't look interesting to those not in-the-know, OK?