And I'm fairly mathematically sophisticated (MA a while back with some effort to keep current).
It seems like the construction "throws on the floor" everything that happens in the creation of a type. But I can't understand what that accomplishes, except making simple operations look like a giant mess.
I would add that the Stack Overflow answer mentions that the construction is a way to build an algebraic type with "polymorphic parameterization". I.e., Haskell uses the laws of algebra and some supplied primitives (AddL and AddR, which you mention but never define!) to calculate A + B. Perhaps if you made that explicit, the article wouldn't have the "floating in the clouds and never coming down" quality it has for me now.
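For what it's worth, my guess is that AddL and AddR are meant to be the two injections of a sum (coproduct) type A + B, something like the following sketch (the names `Add`, `AddL`, `AddR` are taken from the comment above, not from the article):

```haskell
-- A guess at what AddL/AddR might mean: the two injections of a
-- sum type A + B, polymorphic in both parameters.
-- Add plays the role of +, and pairing (,) plays the role of *.
data Add a b = AddL a | AddR b
  deriving (Eq, Show)

-- e.g. Add () () has exactly 1 + 1 = 2 values: AddL () and AddR ().
```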
You can find a somewhat more mathematical explanation here, along with reasons why this is useful:
The most obvious use for derivative types is the automated creation of "zipper" types for functional data structures. Among other uses, zipper types make certain purely functional data structures much more efficient. This is important both for Haskell and ML programmers, and also in situations where you need to leave older versions of a data structure intact, such as when implementing rollback or creating append-only file formats.
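To make the zipper point concrete, here is a minimal list-zipper sketch (my own illustration, not code from the article): a cursor into a list, with the elements before the cursor kept in reverse order so that local moves and edits are O(1), and old versions of the structure are left intact.

```haskell
-- A cursor into a list: elements before the focus (reversed),
-- the focused element, and the elements after it.
data Zipper a = Zipper [a] a [a]
  deriving (Eq, Show)

-- Moving the cursor left or right shuffles one element between
-- the two context lists; at either end, it is a no-op.
left, right :: Zipper a -> Zipper a
left  (Zipper (b:bs) x as) = Zipper bs b (x:as)
left  z                    = z
right (Zipper bs x (a:as)) = Zipper (x:bs) a as
right z                    = z

-- Replacing the focus touches nothing else, so the previous
-- version of the structure survives unchanged (persistence).
set :: a -> Zipper a -> Zipper a
set y (Zipper bs _ as) = Zipper bs y as
```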
If you define a function, call it a "type", and then take the Taylor series of that function, how mind-blowing is that, really?
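As I understand it, the calculation being alluded to goes something like this for lists (this is my reconstruction, not the article's):

```latex
L(X) = 1 + X \cdot L(X)
  \;\Rightarrow\; L(X) = \frac{1}{1 - X},
\qquad
L'(X) = \frac{1}{(1 - X)^2} = L(X)^2
```

i.e. a one-hole context in a list of X's is a pair of lists (the elements before the hole and the elements after it), which is exactly the list zipper. That connection between the formal derivative and the zipper is the payoff the article presumably has in mind.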
My point is that without a motivation for these constructions, they are just constructions. It may be that everyone in the know already understands the motivation, and knows why this thing labeled a "type" really has a "deep" relation to an abstract type-thing. Fair enough, but I'm just saying that if one omits this motivation, the whole exercise doesn't look interesting to those not in the know, OK?