
Hello, declarative world - dkarapetyan
http://codon.com/hello-declarative-world
======
putzdown
This is a well-expressed defense of declarative concepts but the title itself
highlights the Achilles' heel of these ideas: how, after all, do you simply
make the computer do something, like print "Hello, World" at a particular
place and time? Declarative techniques are great for the things that they're
great for, but programmers who hate side effects have the problem that every
program that does anything observable does it through side effects. So I
continue to sit on the declarative fence.

~~~
hvs
I think we programmers have a tendency to see things as all or nothing, even
while we work daily with a number of technologies and languages that do things
differently. Not everything has to be declarative for us to benefit from the
advantages of declarative/functional languages.

Most programmers use SQL on a daily basis, and we don't hear arguments that it
should be procedural to be in line with our other languages.
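A toy sketch of that contrast (plain Python, made-up data): the same filter written once declaratively, SQL-style "what", and once imperatively, step-by-step "how".

```python
# Hypothetical data: (name, age) pairs.
people = [("ann", 34), ("bob", 19), ("cid", 42)]

# Declarative-ish: state *what* you want, like SELECT name ... WHERE age >= 21.
adults_decl = [name for name, age in people if age >= 21]

# Imperative: spell out *how* to build the result, one step at a time.
adults_imp = []
for name, age in people:
    if age >= 21:
        adults_imp.append(name)

assert adults_decl == adults_imp == ["ann", "cid"]
```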

~~~
jerf
I ranted a while ago on HN about how I don't believe in the existence of
declarative:
[https://news.ycombinator.com/item?id=3507281](https://news.ycombinator.com/item?id=3507281)

As I re-read it, read this article, and pondered some more, I'm revising my
view a bit, and beginning to think of "declarative" as something like
probability. This is debatable, but I tend to agree with the idea of
probability as a measure of knowledge of a given agent, and all probabilities
must be understood in the context of some agent for whom those are the correct
probabilities, as opposed to being free-standing numbers.

Similarly, declarative can be seen as relative to the programmer. Haskell code
that looks like

    map (+1) . filter even . map (*2)

may be declarative to one programmer while writing it, who doesn't care about
the details, yet be essentially imperative to someone who, say, is tasked with
optimizing that, and cares about exactly what the compiler does with that code
and how. It's not a characteristic of the code, but the programmer's
relationship to the code, which may even change over time.

From this perspective, most things that proudly wear the label "declarative"
can be seen as over-forcing you to be ignorant of their implementation
details, making those details unnecessarily opaque for the purpose of wearing
that label. Still not a tradeoff I like, vs. the same tech kept transparent.

~~~
tel
Another way of looking at this, which is well loved in the PL community, is
that there is no one exact interpretation of a fragment of syntax. Haskell is
interesting because it has both operational and equational semantics and you
can choose which you'd like to apply at any given time.

What's nice about this is that it removes the "programmer" from the equation
and instead augments language with a bouquet of interpretations. Another
Haskell interpretation, e.g., is the static nature of its typing relation.

This is a good way of "understanding monads" as well. They're an algebraic
structure which captures just enough to let us talk about a variety of
sequential/operational semantic choices for our syntax. At the end of the day,
though, they're just data and we can imbue them with all of the semantic
notions we enjoy via the interpreter pattern. This is even more emphasized
with the "Free Monad" pattern which lets the structure of the monad be very,
very close to the syntactic notion.
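The interpreter-pattern point can be sketched in a few lines of Python (hypothetical `Lit`/`Add` names, not anything from Haskell itself): one piece of "syntax" as plain data, imbued with two different semantic readings.

```python
# One syntax, two interpretations: a tiny expression AST as plain data.

class Lit:
    def __init__(self, n):
        self.n = n

class Add:
    def __init__(self, left, right):
        self.left, self.right = left, right

def evaluate(e):
    """Operational reading: compute a number."""
    if isinstance(e, Lit):
        return e.n
    return evaluate(e.left) + evaluate(e.right)

def pretty(e):
    """Symbolic/equational reading: render the same syntax as a string."""
    if isinstance(e, Lit):
        return str(e.n)
    return f"({pretty(e.left)} + {pretty(e.right)})"

expr = Add(Lit(1), Add(Lit(2), Lit(3)))
assert evaluate(expr) == 6            # one interpretation
assert pretty(expr) == "(1 + (2 + 3))"  # another, of the same data
```

The Free Monad pattern pushes this to the limit: the "program" is just such a data structure, and every interpreter gives it a different semantics.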

~~~
jerf
"What's nice about this is that it removes the "programmer" from the equation
and instead augments language with a bouquet of interpretations."

Implicitly, I'm invoking the idea that a programmer is looking at a particular
runtime.

I feel like I have to put that in, or the whole distinction between
declarative and imperative completely collapses anyhow. "X.sort()"
conceptually simply invokes sorting behavior, possibly with a certain set of
guarantees, but in practice, it's very unusual for that to be running so
abstractly that the programmer can not penetrate down to relevant runtime
details. And when that does happen, generally forces are created that start
pushing all the switchable runtimes towards bug-for-bug compatibility, e.g.
the web browser's Javascript runtime.

It's not a "bad" thing that it collapses in that case; it just does. That is,
consider this the mathematical "collapses" rather than the architectural one. You
have to have a certain amount of specificity in my opinion or there is no
reasonable way to discuss "declarative" vs. "imperative".

~~~
tel
I think you do need "bug for bug" compatibility between semantics—because the
other choice is worse—but that drives home what it takes to genuinely have
alternative semantic choices: sufficient concision of definition to be able to
_maintain_ the choices! This is standard for programmers, though. It's
architecture like any other.

------
kragen
It seems bizarre to me that someone could write an article about relational
programming without even _mentioning_ miniKanren, which can do things like
find quines, given a relational specification for a language interpreter:
[http://webyrd.net/quines/quines.pdf](http://webyrd.net/quines/quines.pdf) —
and it’s in the Clojure standard library as core.logic:
[https://clojure.github.io/core.logic/](https://clojure.github.io/core.logic/)

miniKanren addresses many of the limitations that systems like Prolog had, at
the cost of some performance. The old miniKanren web page explains, "KANREN is
a declarative logic programming system with first-class relations, embedded in
a pure functional subset of Scheme. The system has a set-theoretical
semantics, true unions, fair scheduling, first-class relations, lexically-
scoped logical variables, depth-first and iterative deepening strategies. The
system achieves high performance and expressivity without cuts."

[https://stackoverflow.com/questions/28467011/what-are-the-main-technical-differences-between-prolog-and-minikanren-with-resp](https://stackoverflow.com/questions/28467011/what-are-the-main-technical-differences-between-prolog-and-minikanren-with-resp)
talks a bit more about the differences.

(Also, the article didn't mention Prolog. What's up with that?)

(I don't have any experience with miniKanren myself.)

~~~
ryanmarsh
I'm not sure whether you read the article, or whether the author updated it
between the time you read it and the time you posted your comment, but I
finished reading it before your comment was posted, and the author did, in
that frame of reference, in fact mention miniKanren.

~~~
harperlee
I'm fairly sure I perused the article before this comment was added here and
they already had that reference.

------
Jemaclus
This is pretty well done. I've always been a bit lost on the differences
between imperative and declarative programming. Each time I've asked, I've
gotten an incredulous "You mean you don't KNOW?" And then they start in with
technobabble buzzwords that are meaningless to me.

I appreciate this post, because it works from first principles (close enough)
and moves toward more abstract concepts. Good storytelling here.

------
0xdeadbeefbabe
> We’re still stuck with mostly “von Neumann style” languages that talk about
> state and assignment and memory and stuff — computer things, not idea
> things.

Maybe there is a good reason for this beyond something simple like inertia? On
the other hand I sure don't want to be stuck doing backward things—wait a
minute—is this more pop culture masquerading as insight?

~~~
lucio
1. With “von Neumann style” you are closer to the metal.

2. With "declarative" (e.g. SQL) you're not describing an algorithm; you're
preparing input data for a runtime, a compiler, or a server, which in turn
will run “von Neumann style” algorithms composed from your declarations.

If you need to squeeze out the last drop of resource usage and speed, you use 1).

~~~
nickpsecurity
Maybe. There were LISP machines developed that did all that natively: the
metal itself spoke a high-level language of incredible power, with good
implementations doing low-cost abstraction via macros. Almost all investment
went to the other architectures for sheer price/performance and backward
compatibility. Reliability, maintenance, integration of 3rd-party code,
security, etc. weren't a concern at all. Now they are: even the mainstream is
tossing much of the old stuff where possible. Your same argument might have
worked in reverse had several billion dollars of engineering gone into
hardware designed to implement ML, Scheme, Haskell, and so on.

Besides, there's little debate to be had on what's the best model: all of
these are built on hardware that's interacting functional programs
(combinational logic) and finite state machines (sequential logic). How
they're combined varies from chip to chip. Underneath, though,
everything boils down to a careful combination of functional and semi-
functional programming implemented in silicon. Except analog which is purely
functional. ;)

Note: This brings out another angle whereby we use tens of thousands to
billions of circuits running concurrently in a functional way to simulate a
sequential, imperative machine that does way less in a clock cycle. This is
why porting the algorithms onto FPGAs almost always yields a significant
speed-up unless the imperative machine is insanely optimized (i.e.
Intel/AMD/IBM). It's also why even prototypes like SHard produce efficient,
better-than-generic hardware from functional-language specifications of
algorithms. Maybe we should just go functional all the way and invest money
into CPUs which do _that_ with utmost efficiency, eh?

~~~
0xdeadbeefbabe
> Besides, there's little debate to be had on what's the best model.

Are you saying there is no debate that functional is the best model because
that is what is going on in the hardware?

I haven't heard that point of view before.

Does Knuth's use of MIX in TAOCP mean he doesn't share that view, or does it
mean something else?

~~~
nickpsecurity
I'm saying it's a path worth exploring. The research on LISP machines,
synthesis, and splitting apps between CPUs and FPGAs all showed the
functional approach got the job done better, with less hardware, plus the
well-known software benefits. Even the workhorse instructions of the
imperative processors operate in a functional way when you look at how the
processors actually work. There's so much overhead in supporting the illusion
of sequential, imperative programming that the effort might be better spent
building hardware to support concurrent, functional programming more
efficiently.

------
richardjdare
I've always found 'declarativism' in general to be very hard to understand. I
was terrible at maths until I became a programmer and was able to concretely
explore the mechanics of number using imperative programming languages. For
many years, post-high school mathematics was incomprehensible to me and
languages like Haskell seemed like absurd contrivances when placed next to the
concrete reality of C compilers and CPU clock cycles.

It was only by butting my head against mathematics again and again, and by
easing my way into functional programming via Javascript that I began to
understand declarative thought, and what it meant to perceive logic in a
static picture rather than as the expression of some concrete action. I still
don't _get it_; it's foreign to my way of thinking, but I know what it looks
like now.

I enjoyed this article a great deal, but my hackles did rise somewhat at the
author's assumption that declarativism is somehow closer to human thinking
than imperativism. It is more abstract, yes, and maybe more familiar to a
natural mathematician, but I do not believe declarativism has a special status
with regard to human thought.

I don't think it would be a poor hypothesis to suggest that many people reason
using a much softer, more "narrativ-istic" kind of thinking than is
represented by mathematics. This kind of thinking is closer to imperative
programming which often seems like you are telling a story about how to solve
a problem - the details of the computer's memory and CPU being the
indispensable forest and woodcutter's axe of the tale.

I think that a lot of the proponents of declarative programming (and
functional programming for that matter) are mathematically talented people who
are cognitively inclined to understand things in that way - just as the
positive features of imperative (and object oriented) languages are similarly
attractive and seemingly fundamental, to other kinds of thinkers.

~~~
xamuel
I'm a mathematician and I'll be the first to say, this "declarative =
mathematical" meme is completely inaccurate. If you look at most mathematical
theorems and proofs, they are mostly imperative.

Take for example the proof that there are infinitely many primes. "Assume
there are only finitely many primes (assume: imperative). Let them be called
p_1, ..., p_n (let: imperative). Let q = (p_1 * ... * p_n) + 1 (let:
imperative). Then q is not divisible by any of p_1, ..., p_n, so q is prime
(this is the only non-imperative line in the proof). Contradiction; discard
the assumption and conclude there are infinitely many primes (imperative)."

The biggest "non-imperative" feature of mathematics is the way functions are
first-class objects. But that's only considered "non-imperative" because of a
historical fluke; there's no intrinsic reason why "first-class functions" =
"non-imperative". It's just that the early languages happened to be that
way.

------
decasia
I thought this was interesting and very well put together. Can anyone with
more of a CS background provide any additional context here? I gather from
the article that there is a small but growing family of these "relational"
languages.

~~~
sevensor
Like the author said, this is an old idea that's recently been revived. (I
agree that the article was quite well done.) Most of the ideas he described,
like goals and unification, have been a feature of logic programming for ages.
Prolog is the classic language in logic programming; it attracted major
interest during the AI bubble in the 1980s and survives mainly as an
educational language today. It's worth a look, particularly since there are so
many educational materials available.

------
dwenzek
Here is a very insightful post: "What, If Anything, Is A Declarative
Language?"
([https://existentialtype.wordpress.com/2013/07/18/what-if-anything-is-a-declarative-language/](https://existentialtype.wordpress.com/2013/07/18/what-if-anything-is-a-declarative-language/)).

Written by a functional programming fellow, Robert Harper, this post is not a
rant claiming that we can't escape the procedural, effectful world. It's
rather a call for awareness that the "how" is a fundamental issue in
programming.

------
kpmah
This looks like it's effectively Prolog, and the problems with this way of
programming can be found in criticisms of that class of programming languages.

I haven't written a Prolog program of any complexity myself, but it seems a
common complaint is that the programs can be hard to reason about and debug.
Your solutions are always running on top of a unification algorithm.

~~~
harperlee
Many of the criticisms that I've read about Prolog are very specific to
Prolog (I am not an expert here, though).

Further, the fact that Prolog-like languages (whether or not kanren-like
languages count among them) have issues should not be a basis for discarding
all of them. Useful niches can be found, and their problems might be solved.

More generally, the idea that I can take a function, provide one argument
fewer plus a result, and get the function inverted (in a mathematical sense)
for free seems very, very powerful. The main alternative is to have two
functions, one the inverse of the other, that use the same building blocks for
reuse, but that adds cognitive overhead to the program, whereas a person
understands most inversions quite easily.
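The inversion-for-free idea can be sketched in plain Python with a brute-force search standing in for real unification (a toy `add_rel` relation over an assumed small domain; miniKanren-style systems do this properly and efficiently):

```python
# Toy relation: add_rel relates triples (a, b, c) where a + b == c.
# Pass None for an unknown and the relation searches for it, so the
# same definition runs "forwards" and "backwards".

def add_rel(a, b, c, domain=range(100)):
    for x in (domain if a is None else [a]):
        for y in (domain if b is None else [b]):
            z = x + y
            if c is None or c == z:
                yield (x, y, z)

assert next(add_rel(2, 3, None)) == (2, 3, 5)   # forward: addition
assert next(add_rel(None, 3, 5)) == (2, 3, 5)   # backward: subtraction, for free
```

Nothing here had to be written twice: the "inverse function" falls out of querying the same relation with a different argument left unknown.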

~~~
nickpsecurity
I agree. Prolog isn't logic programming: it's a specific approach to first-
order logic programming. So its failures can't be considered failures of
first-order or other logic programming by default. One has to look at the
nature of the failure to determine where the blame lies. Then see if there's a
way to fix it. And if not, only then say "This appears to be a weakness of X
that justifies using a different tool."

That said, most successful work avoids first-order logic in favor of tools
such as Isabelle/HOL or Coq. They also support extraction to OCaml, etc., with
certified compilers in the works for them. _Maybe_ better to use those for
logic programming. I'm not a domain expert in that field, so can't be sure.

For people doing first-order, though, the Mercury language and the Visual
Prolog app are the best I know of, with plenty of examples.

------
davexunit
Now, go pick up The Reasoned Schemer!
[https://mitpress.mit.edu/books/reasoned-schemer](https://mitpress.mit.edu/books/reasoned-schemer)

------
timothycrosley
I think the best way to move away from the way computers think is just to
build higher-level functions and utilities. For example:
[https://github.com/timothycrosley/hug](https://github.com/timothycrosley/hug)

------
notNow
Instead of the declarative/imperative paradigm and the ongoing shift to more
functional-flavoured languages with more abstractions crammed under the hood,
I'd prefer multi-paradigm languages like JS to be the languages of the future.

Multi-paradigm languages, if done right, combine the best of both the
imperative and declarative worlds. Besides, they're very friendly to beginners
and don't have the intimidating quality that "pure" languages like Haskell
have.

Also, procedural coding is easy to do and carries low mental overhead in
writing, and not every programming problem has to be modeled in functional
terms to be solved. Sometimes it's economical and efficient just to get the
work done in two lines of code without jumping through all those functional
hoops just to capture a field value in some form.

~~~
gizmo686
Even multi-paradigm languages need to choose a dominant paradigm.

Javascript supports functional-style programming, but it is still in the
context of an imperative language. In the same way, Haskell supports
imperative programming, but it has to be done within the context of a
functional language.

Also, in my opinion, the intimidating part of Haskell is not its purity, but
rather its type system.
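The embedding point can be sketched with the thread's earlier Haskell pipeline, `map (+1) . filter even . map (*2)`, rewritten functional-style in an imperative host language (Python here as a stand-in for JS):

```python
# Functional style embedded in an imperative language: the pipeline itself
# is composed of map/filter, but it lives inside ordinary imperative code.

def pipeline(xs):
    doubled = map(lambda x: x * 2, xs)
    evens = filter(lambda x: x % 2 == 0, doubled)
    return [x + 1 for x in evens]

assert pipeline([1, 2, 3]) == [3, 5, 7]
```

The style is functional, but the surrounding context (mutable bindings, statements, eager evaluation) remains that of the imperative host, which is exactly the asymmetry described above.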

