I dunno, not being able to reason about performance is a pretty big deal. I've only ever fantasized about learning Haskell, but I can tell you that this quality is what makes SQL (or god forbid an ORM) a pain to deal with in production. The explain plan for a given SQL query can change on you for no reason other than that your table has grown. You can go from using an index for speedy queries to a full table scan because your SQL engine decided to stop using any of your indexes.
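To make that concrete, here's a small sketch using Python's built-in sqlite3 — the table and index names are made up for illustration, and the exact plan text varies by SQLite version, but it shows how the same query text gets a different plan depending on what the engine decides to use:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")

# Without a secondary index, the planner has no choice but a full scan.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?", ("a@b.c",)
).fetchall()
print(plan[0][3])  # e.g. "SCAN users"

# After adding an index, the identical query text gets a different plan.
# A real engine can just as silently switch back as table statistics change.
conn.execute("CREATE INDEX idx_email ON users (email)")
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?", ("a@b.c",)
).fetchall()
print(plan[0][3])  # e.g. "SEARCH users USING INDEX idx_email (email=?)"
```

The point being: nothing in the query itself changed, only the engine's opinion of how to run it.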
I don't know what the equivalent cases might be for a more general purpose language, but I strongly suspect it's a problem all declarative languages suffer from.
>The explain plan for a given sql query can change on you for no reason other than your table has grown
True, but that may well lead to more predictable performance. If you write a procedural program and then the dataset grows, your program might slow down dramatically, while the new execution plan generated by the SQL engine would adapt and keep performing well (...ideally).
You could argue that a procedural programmer could anticipate the growth of the dataset and choose a suitable algorithm from the start. But if you have not one but tens of datasets and value ranges, whose sizes relative to each other determine what the best algorithm is, then manual optimization becomes very difficult indeed.
A Haskell compiler doesn't use knowledge of input data to optimize the program though (at least not to the extent a database engine does). So I think a relational database engine is more useful in this regard.
Haskell is not declarative, however, so the programmer actually has control over the evaluation order, by ordering demands for the results of computations. It does take a little longer to fully grok how lazy code evaluates compared to simple left-to-right eager evaluation a la Scheme, so it can be a bit of a rough start (and many only take the time to fully understand laziness when they're in the middle of a critical issue caused by it). On the other hand, a large domain of problems (for example, search) is more easily expressed in a compositional manner using laziness.
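Not Haskell, but Python generators give a rough feel for the compositional-search point: each stage demands only as much of the upstream stage as it actually needs, so you can compose over "infinite" data. A sketch (the names are just illustration):

```python
def naturals():
    """An 'infinite list' of 0, 1, 2, ... -- nothing is computed until demanded."""
    n = 0
    while True:
        yield n
        n += 1

# Compose transformations without forcing anything:
squares = (n * n for n in naturals())
big_squares = (s for s in squares if s > 1000)

# Demanding the first result drives evaluation, and only as far as needed.
first = next(big_squares)
print(first)  # 1024 (32 * 32, the first square over 1000)
```

In eager left-to-right evaluation you couldn't even write `naturals()` as a list; with demand-driven evaluation the search terminates as soon as the first answer is found.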
Forgive my ignorance; I really only have a passing familiarity with Haskell, and I did a quick search before trying to lump it in with SQL as declarative, but http://www.haskell.org states above the fold, "declarative, statically typed code". If it's not generally considered a declarative language, is there a better way to describe it?
I think "declarative" exists on a spectrum. There's no clearly defined line between declarative and nondeclarative that I know of. For instance I would say Haskell < Prolog < SQL are on one end of the declarative spectrum, but in each case you do end up reasoning about the underlying execution model.
This is a fair statement. What I (and, I believe, most others) mean by declarative programming is more specifically logic-and-constraint programming, but in many ways you could view Haskell's type system, typeclasses, and laziness in combination as more declarative than, say, a purely functional subset of Scheme. To the grandparent post I originally responded to: if you want to take a look at a more purely declarative language, see miniKanren
https://docs.racket-lang.org/minikanren/index.html
There's also an extension with constraints, cKanren
http://scheme2011.ucombinator.org/slides/Alvis2011.pdf
To try to summarize: a declarative language is one in which you tell the computer what you want, but not so much how to do it. Logic variables (and unification) are the key differentiator of these systems.
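For a concrete taste of what "logic variables" means, here's a toy unifier in the spirit of miniKanren's core — a sketch for illustration, not miniKanren's actual API:

```python
class Var:
    """A logic variable: a placeholder that unification can bind."""
    pass

def walk(term, subst):
    """Follow variable bindings in the substitution until a non-bound term."""
    while isinstance(term, Var) and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst):
    """Return an extended substitution making a and b equal, or None on failure."""
    a, b = walk(a, subst), walk(b, subst)
    if a is b or a == b:
        return subst
    if isinstance(a, Var):
        return {**subst, a: b}
    if isinstance(b, Var):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

x, y = Var(), Var()
s = unify((x, 2), (1, y), {})
print(walk(x, s), walk(y, s))  # 1 2
```

Notice there's no "direction": you state that `(x, 2)` and `(1, y)` are equal, and the system works out what the variables must be — that's the "what, not how" flavor in miniature.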
Take a look at chapter one of CTM (Concepts, Techniques, and Models of Computer Programming) for a really nice breakdown of the orthogonal components of a language. I have a feeling differentiable programming and (dependent+) types belong on that list, too.
As an additional question of my own for anyone reading - is there anything that should be on that list thanks to modern CS research, that isn't?
> It does, however, take a little longer to fully grok how lazy code evaluates
I doubt it would take anywhere near as long if students were told from the get-go that the only thing triggering evaluation is pattern matching: a value gets computed exactly when a pattern needs to destructure it.
Unfortunately it seems like most teaching resources would prefer to uphold a sense of mysticism.