If anyone's interested in this and where else it might go, I recommend reading Don's SO answer, this SO question and the related answers, and these papers [3,4] on derivatives and dissections by Conor McBride. There are also several good blog posts and a paper by Brent Yorgey on the topic of combinatorial species, and of course Dan Piponi's blog is a treasure trove of interesting insights at the intersection of math, physics and computer science.
 http://www.youtube.com/watch?v=YScIPA8RbVE (shitty audio for the first minute or 2 - it gets better).
Edit: It should go without saying, but because a few people last time around seemed to think I was trying to pass this off as my own ideas, let me state that there is nothing original in this blog post. It's a repackaging of the ideas and work of lots of other people, all of whom are way smarter than I am.
And I'm fairly mathematically sophisticated (MA a while back with some effort to keep current).
It seems like the constructions "throw on the floor" everything that happens in the creation of a type. But I can't understand what that does except make simple operations look like a giant mess.
I would add that the Stack Overflow answer mentions that the construction is a way to build an algebraic type with "polymorphic parameterization". I.e., Haskell uses the laws of algebra and some supplied primitives (AddL and AddR, which you mention but don't define!) to calculate A + B. Perhaps if you made that explicit, the article wouldn't have the "floating in the clouds and never coming down" quality that it has for me now.
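For what it's worth, here's my guess at what those undefined AddL/AddR primitives look like: a plain sum type, isomorphic to Either. The names `Add` and `add` here are mine, not the post's.

```haskell
-- A guess at the post's AddL/AddR primitives: a sum type A + B,
-- isomorphic to Either, with one constructor per summand.
data Add a b = AddL a | AddR b
  deriving Show

-- |A + B| = |A| + |B|: to consume a sum you must handle both injections,
-- just like Prelude's `either`.
add :: (a -> c) -> (b -> c) -> Add a b -> c
add f _ (AddL x) = f x
add _ g (AddR y) = g y
```

With this in hand, "calculating A + B" is just ordinary pattern matching, e.g. `add length negate (AddL "hi")` gives `2`.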
You can find a somewhat more mathematical explanation here, with explanations of why this is useful:
The most obvious use for derivative types is the automated creation of "zipper" types for functional data structures. Among other uses, zipper types make certain purely functional data structures much more efficient. This is important both for Haskell and ML programmers, and also in situations where you need to leave older versions of a data structure intact, such as when implementing rollback or creating append-only file formats.
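For anyone who hasn't seen one, a list zipper is about the smallest example of the idea. This is a generic sketch, not code from the post:

```haskell
-- A minimal list zipper: a focused element plus the elements to its left
-- (stored reversed, nearest first) and to its right. The two lists are the
-- one-hole context -- the "derivative" of the list type -- and the focus
-- is the element that fills the hole.
data Zipper a = Zipper [a] a [a]
  deriving Show

fromList :: [a] -> Maybe (Zipper a)
fromList []     = Nothing
fromList (x:xs) = Just (Zipper [] x xs)

-- Moving the focus one step is O(1), which is where the efficiency win
-- over re-traversing the list each time comes from.
left, right :: Zipper a -> Maybe (Zipper a)
left  (Zipper (l:ls) x rs) = Just (Zipper ls l (x:rs))
left  _                    = Nothing
right (Zipper ls x (r:rs)) = Just (Zipper (x:ls) r rs)
right _                    = Nothing

-- Editing at the focus is also O(1), and being purely functional it
-- leaves any older versions of the structure intact.
modify :: (a -> a) -> Zipper a -> Zipper a
modify f (Zipper ls x rs) = Zipper ls (f x) rs

toList :: Zipper a -> [a]
toList (Zipper ls x rs) = reverse ls ++ (x : rs)
```

For example, `fmap toList (fmap (modify (+10)) (right =<< fromList [1,2,3]))` edits the second element in O(1) and yields `Just [1,12,3]`, without destroying the original list.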
If you define a function, call it a "type" and then take the Taylor series of that function, how mind-blowing is that really?
My point is that without a motivation for these constructions, they are just constructions. It may be that everyone in-the-know already understands the motivation, and knows why this thing labeled a type really has a "deep" relation to an abstract type-thing. Fair enough, but I'm just saying that if one omits this motivation, your whole exercise doesn't look interesting to those not in-the-know, OK?
A fun fact: 0^0 is not undefined here. In fact, a^0 = 1 for all a.
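To spell that out in types (my own sketch, reading a^b as the function type b -> a and 0 as the empty type Void): a^0 = 1 says there is exactly one function out of Void into any type a, and the standard library even names it.

```haskell
import Data.Void (Void, absurd)

-- a^0 = 1: there is exactly one function Void -> a, namely `absurd`,
-- which can never actually be called because Void has no values.
oneFunction :: Void -> a
oneFunction = absurd

-- In particular 0^0 = 1: the unique (vacuous) function Void -> Void.
zeroToZero :: Void -> Void
zeroToZero = absurd
```

The claim is checked by the type checker rather than at runtime; no value of Void ever exists to pass in.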
One suggestion: say "equivalence" of types, rather than "equality". (You do this, once.)
In principle we could [create? make? construct?] types corresponding to 3, 4, 5 and so on.
Thought you might want to fix it. Thanks, again.