Dialogical logic programming (boxbase.org)
85 points by adamnemecek on Feb 11, 2019 | 19 comments



It sounds like the compiler has an SMT solver built in; that is an NP-complete operation and could get slow if it has to solve something complex.

I like the idea, though, that time in programming languages is the root of much evil. That is, the programmer must spend time thinking about what order to put the statements in.

In many cases the order does not matter, so it is wasted cognitive effort. In some cases it matters a great deal.


Check out this language called CHR

https://en.m.wikipedia.org/wiki/Constraint_Handling_Rules

It’s like Prolog without the performance overhead, thanks to the magic of linear logic.
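
For the unfamiliar, the canonical CHR example is the leq/2 (less-than-or-equal) handler; this is the standard textbook program, runnable in SWI-Prolog via library(chr), not something from the article:

    :- use_module(library(chr)).
    :- chr_constraint leq/2.

    reflexivity  @ leq(X, X) <=> true.
    antisymmetry @ leq(X, Y), leq(Y, X) <=> X = Y.
    transitivity @ leq(X, Y), leq(Y, Z) ==> leq(X, Z).

    % ?- leq(A, B), leq(B, C), leq(C, A).
    % A = B, B = C.

Simplification rules (<=>) replace constraints; propagation rules (==>) add new ones.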


What connection do you see between CHR and linear logic? I've never seen it described that way.

Anyway, the last time I needed to write a constraint solver in Prolog, I did try it in CHR, but it was way too slow for my needs, so I ended up rewriting it in plain SWI-Prolog. So from this one data point, I don't see where your assertion of "without the performance overhead" comes from.

I also found CHR's multiset semantics quite painful for my application, a lot of my rules had to be careful not to post duplicates of constraints. I would have loved having a single flag for telling the CHR system to just take care of that for me. That would have cleaned up my model and probably been more efficient as well. All of the above is qualified by the fact that I was a relative CHR newbie at the time.

Overall: CHR is quite interesting, but it's no magic silver bullet, and you seem to be overselling it.



OK, yes, a "foundation" in the sense that researchers went and looked at already existing CHR and decided that its semantics was not as clear as it should be, and that it could be explained nicely in terms of linear logic. Not "the foundation" in the sense that the people initially designing CHR decided to base it on linear logic (unlike Wadler's work on linear type systems, say). Interesting, thanks. But this approach is hardly a way to achieve "[no] performance overhead thanks to the magic of linear logic".


It is SAT that is NP-complete. SMT may be undecidable depending on your theories.


It would help comprehension if the ∏ and ∐ operators had been explained.


From what I can tell, they are NOT universal or existential quantifiers, as those symbols are also used in the post. I think they are

https://en.wikipedia.org/wiki/Coproduct

https://en.wikipedia.org/wiki/Product_(mathematics)


This is correct. The idea is that you can conjunctively (using the product) or disjunctively (using the coproduct) quantify the variables over the formulas that use them, essentially getting a conjunction/disjunction of the formula, instantiated with each of the concrete values of the variable quantified over.
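
Concretely, in the article's notation both quantifiers just expand over the (finite) domain:

    (∏ X ∈ 0..2) even(X)  ≡  even(0) ⊓ even(1) ⊓ even(2)
    (∐ X ∈ 0..2) even(X)  ≡  even(0) ⊔ even(1) ⊔ even(2)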


They seem to be universal and existential quantifiers, which is consonant with the usual notation in mathematical logic and (I believe) in dependent type theory.



The Deep Inference link is a treasure trove of information:

http://alessio.guglielmi.name/res/cos/


Wow. Awesome share. Thanks!


A nice article and neat little demo. While it serves as a foil for the point the author was making, I feel like Prolog's depth-first search (DFS) is used almost as a straw man by many. It's not that the problem the OP points out never occurred to the inventors of Prolog. Quite the opposite, in fact: in the "game" of evaluating a logic program, the naive search space of possible "moves" is astronomical. In the beginning, no one knew how to deal with it. Selective linear definite clause resolution (SLD) is simply the (obvious in hindsight, but surprising at the time) observation that taking the clauses one at a time is a viable strategy. Once you've committed to taking clauses one at a time, taking them in the order-as-written is a natural choice.

I read a paper which compared DFS, iterative-deepening DFS (DFS with a varying maximum depth), and breadth-first search (BFS). The paper's conclusion was that iterative-deepening DFS performed best overall, but that BFS was a viable alternative (the paper's author advocates for BFS). However, my reading of the graphs and data presented was that, for the problems to which it was suited, plain DFS was really not that bad. Intuitively, I would ascribe that to this: you have to do the work eventually, and at the lowest level the CPU is doing one thing at a time anyway.

Where that evaluation strategy falls down, as the OP demonstrates, is when the answer is needed before the clause which computes the answer has been seen. In the Prolog-work-alike C++ library I have experience using (not designed by me), one way this is worked around is by having program fragments operate on partially filled-in data as much as possible, and by delaying evaluation of things that really do need the complete data until the end. This loses some of the "declarative programming" benefit of logic programming: you're having to manually ensure order. But you should really be writing functional code which delays real I/O as long as possible regardless of what kind of programming you're doing.
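
Standard Prolog has a related mechanism, coroutining, which suspends a goal until its variable is bound. A minimal sketch using SWI-Prolog's built-in freeze/2 (not the C++ library I mentioned):

    % freeze(X, Goal) suspends Goal until X is bound.
    ?- freeze(X, X mod 2 =:= 0), X = 4.
    X = 4.

    ?- freeze(X, X mod 2 =:= 0), X = 3.
    false.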

I spent a lot of cycles (brainpower) trying to think of a way to make this less dependent on the order of clauses, while still being efficient. That's a harder problem than it looks.


This was a fun read, but there are some strawmen and misunderstandings in the article.

> This should be acceptable as conjunction is commutative.

Ah, but Prolog's comma (,) is not logical conjunction, and all Prolog programmers know this! The case is quite similar to, for example, && in C, which is also not logical conjunction. Both of these operators mean "and then", not simply "and". In certain cases, there is no difference, so often you can read either of these as a simple "and", but you cannot do that in general. And yes, we do lazily call it "and" or "conjunction", but we never forget that these words are not to be interpreted in the strict logical sense.

So there is no need to introduce "conjunctive quantifiers" to Prolog; it already has them, written as a comma.
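
To see the "and then" reading concretely, take is/2, which needs its right-hand side instantiated before it runs (a quick SWI-Prolog check):

    ?- Y = 2, X is Y + 1.
    Y = 2,
    X = 3.

    ?- X is Y + 1, Y = 2.
    ERROR: Arguments are not sufficiently instantiated

A logical conjunction would give the same answer either way; Prolog's comma does not.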

> (∐ X ∈ 0..2) even(X)

> Would be the exactly same as writing:

> even(0) ⊔ even(1) ⊔ even(2)

> All variables in Prolog can be interpreted as this kind of choice-existential quantifiers such as above. Binding a variable is similar to doing a choice. If the choice is wrong, we attempt to backtrack from it as long as we have other choices we can examine.

No, this is an incomplete model of what you can do in Prolog. If you write:

    ?- X = f(A, B).
then A and B are not choice-existentially quantified. They do not, at this point, correspond to choices that you can enumerate and backtrack over. A and B are simply entirely unconstrained variables. Many, many people misunderstand Prolog logical variables as something like lazy streams of possible solutions, but they aren't. Not until you do something that tries to bind them. "Reinventing" Prolog in this way gives you something that is not Prolog.
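
A toplevel session makes the distinction visible; it is only member/2 below that finally turns A into something you can enumerate and backtrack over (SWI-Prolog):

    ?- X = f(A, B).
    X = f(A, B).

    ?- X = f(A, B), member(A, [1, 2]).
    X = f(1, B), A = 1 ;
    X = f(2, B), A = 2.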

> We may also present prolog's disjunction through disjunctive choice:

> even(0).

> even(s(s(N))) :- even(N).

> Corresponds to:

> even(X) :-

> X = 0 ⊔ (X = s(s(N)) ∧ even(N)).

It also corresponds to:

    even(X) :-
        X = 0 ; (X = s(s(N)), even(N)).
The author may not be aware that Prolog already has (again, not-fully-logical) disjunction?
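
A quick check that the single-clause form behaves like the two-clause one (SWI-Prolog; renamed to even2 here only to avoid clashing with the original):

    even2(X) :-
        (   X = 0
        ;   X = s(s(N)),
            even2(N)
        ).

    ?- even2(s(s(0))).
    true .

    ?- even2(s(0)).
    false.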

> It might be this gives us a sound concept of negation in the language, but I haven't concluded that it'd do so because I'm not certain. For example, the negation of the above one might be:

> ¬even(X) :-

> X != 0 ⊓ (X != s(s(N)) ∨ ¬even(N)).

If all you know about N is that X is different from s(s(N)), then there is no relation between X and N, and the recursive call to ¬even(N) will not tell you anything about X. So this won't work.
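
A direct transcription demonstrates the failure mode. In this naive version (my sketch, with \=/2 standing in for the article's !=), the N in the recursive call is a fresh variable with no connection to X, so that branch can never succeed:

    bad_not_even(X) :-
        X \= 0,
        (   X \= s(s(_))
        ;   bad_not_even(N) ).    % N is fresh, unrelated to X

    % ?- bad_not_even(s(s(s(0)))).    % odd, so this should succeed...
    % false.                          % ...but both branches fail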


> If all you know about N is that X is different from s(s(N)), then there is no relation between X and N, and the recursive call to ¬even(N) will not tell you anything about X. So this won't work.

Won't it?

Either:

1. X = 0, in which case ¬even(X) is false.

2. X != 0, in which case

  a. X can't be unified with s(s(N)), in which case ¬even(X)
     is true (since X can then only be s(0)).
  b. X can be unified with s(s(N)) for some N, in which case
     we restart the procedure for that N.

     N is either

     I.  even, in which case we'll eventually evaluate
         even(0), which is a base case.
     II. odd, in which case we'll eventually be unable to
         unify the argument X with some s(s(N)). By negation
         as failure[^naf], we'll eventually prove ¬even(X).
[^naf]: https://en.wikipedia.org/wiki/Negation_as_failure


You're right, from the point of view of a sufficiently strong logic and a sufficiently strong prover for that logic!

But:

> N is either ...

In a type system like Prolog's, N can also be any of:

    f(Y)
    apple_pie
    N  (a free variable)
You assume a type and mode system in which X and N must be bound, and they must be bound to ground natural number terms. You also assume that the logic programming system can deduce positive information from the failing unification on an infinite domain (that X must be s(0) if it is not 0 and not of the form s(s(N))).

It's possible that the article's author also had all these implicit assumptions, but it's not clear to me that they did.


Right, thanks for the insight.

A further thought: `apple_pie` wouldn't actually be problematic, in the sense that we could deduce `¬even(apple_pie)` from the way it's currently written. This is because `apple_pie` is obviously neither `0` nor `s(s(N))`. Of course, this might not be desirable philosophically since it might be preferable to consider applying `even` to `apple_pie` nonsense since they are of different types (informally speaking).

Is this right? Am I missing something?


> A further thought: `apple_pie` wouldn't actually be problematic, in the sense that we could deduce `¬even(apple_pie)` from the way it's currently written.

Yes. We can have the same behavior with the following Prolog implementation:

    not_even(X) :-
        X \= 0,
        (   X \= s(s(_N))
        ;   X = s(s(N)),
            not_even(N) ).
(This is a translation of the article's definition, except that I added the "X = s(s(N))" condition to make the case distinction explicit and to actually bind N in the second branch. It wouldn't work without this, for the reason I mentioned above: the system cannot deduce this equality from a failed unification.)

    ?- not_even(s(s(0))).
    false.

    ?- not_even(s(s(s(0)))).
    true .

    ?- not_even(apple_pie).
    true .
> Of course, this might not be desirable philosophically since it might be preferable to consider applying `even` to `apple_pie` nonsense since they are of different types (informally speaking).

Right. This is one of the reasons why some Prolog extensions and Prolog-like languages like Mercury add static type systems. It makes little sense to ask whether apple pie is even or odd.
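
Relatedly, the \=/2-based definition above is also mode-sensitive: called with an unbound argument, it fails immediately (X \= 0 fails because a free X can be unified with 0), even though infinitely many terms are not even:

    ?- not_even(X).
    false.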



