Adding new datatypes vs. adding new functions is the core tension between OOP and FP. It's really easy to add new functionality to existing datatypes in functional programming, and really easy to add new datatypes in OOP.
OOP has problems adding existing functionality to new types. Interfaces are one attempt at the problem, but they fall short for several reasons. Clojure's protocols are a better solution (as far as I can tell). Clojure also provides multimethods and hierarchies for more sophisticated function dispatch; their use and the reasoning behind their design are well documented on Clojure's website.
Case statements complect who/what pairs, so disentangling them should simplify things. Clojure has really neat runtime polymorphism ( http://clojure.org/runtime_polymorphism ) that offers one way to accomplish this without necessarily wading into type hell, and your multimethods end up looking like data.
"Clojure supports sophisticated runtime polymorphism through a multimethod system that supports dispatching on types, values, attributes and metadata of, and relationships between, one or more arguments."
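Sketching that in Python (since the thread already mixes Scheme and Clojure): the `defmulti`/`defmethod` names below are borrowed from Clojure, but the registry is a made-up minimal stand-in, not Clojure's actual implementation.

```python
# Minimal sketch of value-based multimethod dispatch, loosely
# mirroring Clojure's defmulti/defmethod.  The names are borrowed;
# the machinery here is just a plain dict.

def defmulti(dispatch_fn):
    """Create a multimethod that routes calls via dispatch_fn."""
    methods = {}

    def multi(*args):
        key = dispatch_fn(*args)
        if key not in methods:
            raise TypeError(f"no method for dispatch value {key!r}")
        return methods[key](*args)

    multi.methods = methods
    return multi

def defmethod(multi, dispatch_value):
    """Register an implementation for one dispatch value."""
    def register(fn):
        multi.methods[dispatch_value] = fn
        return fn
    return register

# Dispatch on an attribute of the argument, not on its class:
area = defmulti(lambda shape: shape["kind"])

@defmethod(area, "circle")
def _(shape):
    return 3.141592653589793 * shape["r"] ** 2

@defmethod(area, "square")
def _(shape):
    return shape["side"] ** 2

print(area({"kind": "square", "side": 3}))  # 9
```

Note how the dispatch function is ordinary code, and the method table is ordinary data — that's the "multimethods end up looking like data" point.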
It exchanges one form of duplication for another; it doesn't eliminate it.
The real "problem" is that textual code is essentially one-dimensional, but we are interweaving two dimensions of option combos. It's arguably a limit inherent to textual code, regardless of the paradigm used to write it.
I'm currently thinking of a language where the "code" lives in a database of sorts, and text is just a view of the real data. Of course, there would be some text serialization, but it wouldn't be the "true way" to program in it...
At least in functional programming, some things just happen to be case analyses, and there ain't much you can do to get around it.
Consider the following Scheme definition of the "eval" function. (This is a cond and not a case statement, but it's still type-based dispatch, so I think it qualifies.)
(define (eval exp env)
  (cond ((self-evaluating? exp) exp)
        ((variable? exp) (lookup-variable exp env))
        ((assignment? exp) (eval-assignment exp env))
        ((definition? exp) (eval-definition exp env))
        ...))  ; you get the picture
The input to this is some Scheme expression. If it's a number or string (for example), that's the result (it's self-evaluating). On the other hand, if it starts with the symbol "define", it's a definition and we define the variable in the environment.
Scheme doesn't support polymorphism, but as far as I can tell, even if it did, this couldn't be made polymorphic without changing what the function actually does. At the end of the day, if the input to your function is diverse enough, you're probably going to have to perform some sort of dispatch on the type of that input.
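For what it's worth, here's the same dispatch shape in Python, under a made-up representation (numbers self-evaluate, strings are variable names, lists tagged "define" are definitions — unlike Scheme, where strings also self-evaluate):

```python
# Predicate-based dispatch on the structure of the input expression,
# in the style of the Scheme cond above.  The representation is
# invented for illustration: numbers self-evaluate, strings are
# variable names, ["define", name, value] is a definition.

def eval_exp(exp, env):
    if isinstance(exp, (int, float)):                 # self-evaluating
        return exp
    if isinstance(exp, str):                          # variable lookup
        return env[exp]
    if isinstance(exp, list) and exp[0] == "define":  # definition
        _, name, value = exp
        env[name] = eval_exp(value, env)
        return name
    raise SyntaxError(f"unknown expression type: {exp!r}")

env = {}
eval_exp(["define", "x", 42], env)
print(eval_exp("x", env))  # 42
```

The if/elif chain is exactly the cond: one branch per kind of input, all living in a single function.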
On the other hand, I have a fairly small amount of experience with OO code, so I could be totally off base here... (please let me know if that's the case).
In SICP (where I assume this is taken from), they actually solve this problem: they have a dispatch function (as multimethods do in Clojure) and a map from dispatch values to action functions. You can then extend the evaluator by adding new <dispatch value, action function> pairs to the map, which can be done at runtime.
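A rough Python sketch of that data-directed idea (the table and tag names are illustrative, not SICP's actual code):

```python
# SICP-style data-directed dispatch: a table maps dispatch values
# (expression tags) to action functions, and the evaluator is
# extended at runtime by adding new entries — eval_exp itself
# never changes.

actions = {}

def eval_exp(exp, env):
    if isinstance(exp, (int, float)):  # self-evaluating atoms
        return exp
    tag = exp[0]                       # tagged-list expressions
    if tag not in actions:
        raise SyntaxError(f"unknown expression tag: {tag!r}")
    return actions[tag](exp, env)

# Extend the evaluator by registering new <dispatch value, action> pairs:
actions["quote"] = lambda exp, env: exp[1]
actions["define"] = lambda exp, env: env.__setitem__(exp[1],
                                                     eval_exp(exp[2], env))

env = {}
eval_exp(["define", "x", ["quote", "hello"]], env)
print(env["x"])  # hello
```

Adding a new expression type is now one dictionary assignment, with no edit to the central cond.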
It may not feel like it, but that is pretty much what languages like Java do: at run time, the type of the receiver is evaluated and the correct method is selected based on it. So we have the dispatch function, and a way to map its results to action functions. The only difference is that, because of static typing, we cannot extend the <dispatch value, action function> map at run time, because we cannot create types at run time.
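A Python sketch of that same point: the receiver's type is effectively the dispatch function, and each class's method table is the map from dispatch values to action functions.

```python
# Class-based single dispatch: calling s.area() looks up "area" in
# the method table of type(s).  type() plays the role of the
# dispatch function; the class dict maps its result to an action.

class Circle:
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.141592653589793 * self.r ** 2

class Square:
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2

shapes = [Circle(1), Square(3)]
print([round(s.area(), 2) for s in shapes])  # [3.14, 9]
```

(Python, being dynamic, does let you add entries to these tables at run time; the comment above is about statically typed languages like Java.)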
There's some very interesting research on resolving this issue: http://www.cs.ucla.edu/~todd/research/icfp02.html