Hacker News

For anyone who finds this topic interesting, you might be interested in the following resources:

http://minikanren.org/

The second edition of `The Reasoned Schemer` (MIT Press, 2018):

https://mitpress.mit.edu/books/reasoned-schemer-second-editi...

The Clojure/conj 2016 talk Greg Rosenblatt and I gave on Barliman:

https://www.youtube.com/watch?v=er_lLvkklsk

`A Unified Approach to Solving Seven Programming Problems (Functional Pearl)`, William E. Byrd, Michael Ballantyne, Gregory Rosenblatt, Matthew Might (ICFP 2017) (Open Access):

https://dl.acm.org/doi/10.1145/3110252

and the related talk:

https://www.youtube.com/watch?v=o3AHnyEf7IE

and Nada Amin's interactive version of the paper:

http://io.livecode.ch/learn/namin/icfp2017-artifact-auas7pp

Cheers,

--Will




Hi, Will. I noticed this:

>> Your program can be more efficient, you can have more expressive control but if you are not careful, your program can be unsound. In other words: there may be an answer to your query that you do not find. So an analogy would be in a database lookup. If you are doing a database query, maybe there is actually information in a table that you should find, but you do not find it.

To clarify, that's incompleteness. Unsoundness is when an inference rule returns wrong results. Incompleteness is where it doesn't return all correct results. As you say, unification without an occurs check is unsound and Prolog's depth-first search is incomplete.
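To make the occurs-check point concrete, here is a toy first-order unification in Python (an illustrative sketch, not any real system's implementation; the term encoding, with uppercase-initial strings as variables and tuples as compound terms, is an assumption of the example):

```python
def is_var(t):
    # Prolog-style convention (an assumption of this sketch):
    # uppercase-initial strings are variables, lowercase are constants
    return isinstance(t, str) and t[:1].isupper()

def walk(t, s):
    """Resolve a variable through the substitution s."""
    while is_var(t) and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    """Does variable v occur inside term t under substitution s?"""
    t = walk(t, s)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, s) for a in t)

def unify(a, b, s, occurs_check=True):
    """Unify a and b, extending substitution s; return None on failure.
    Compound terms are tuples, e.g. ('f', 'X') for f(X)."""
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        if occurs_check and occurs(a, b, s):
            return None       # X = f(X) has no finite solution: fail soundly
        return {**s, a: b}
    if is_var(b):
        return unify(b, a, s, occurs_check)
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s, occurs_check)
            if s is None:
                return None
        return s
    return None

print(unify('X', ('f', 'X'), {}))                      # None: sound failure
print(unify('X', ('f', 'X'), {}, occurs_check=False))  # {'X': ('f', 'X')}: a cyclic "solution"
```

Skipping the check is the unsound (but cheap) default: the second call happily binds X to a term containing X itself.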

I wanted to ask, because you brought it up in the interview, how does minikanren (or minikanrens?) handle the occurs check? You said that you are interested in trying to push the relational aspect as far as possible, recognising that this gives up efficiency - can you quantify to what extent minikanren gives up efficiency and generally say a bit more about the trade-off between (if I understand it correctly) efficiency and relational (declarative?) purity?


Hi YeGoblynQueenne,

I always read your logicy comments and posts--thank you!

> To clarify, that's incompleteness. Unsoundness is when an inference rule returns wrong results. Incompleteness is where it doesn't return all correct results. As you say, unification without an occurs check is unsound and Prolog's depth-first search is incomplete.

Thank you. Yes, I misspoke.

For anyone interested, this video gives a nice overview of soundness and completeness, in the context of logic: https://www.youtube.com/watch?v=HeQX2HjkcNo

> I wanted to ask, because you brought it up in the interview, how does minikanren (or minikanrens?) handle the occurs check? You said that you are interested in trying to push the relational aspect as far as possible, recognising that this gives up efficiency - can you quantify to what extent minikanren gives up efficiency and generally say a bit more about the trade-off between (if I understand it correctly) efficiency and relational (declarative?) purity?

miniKanren performs unification with the occurs check. This is one of the tradeoffs miniKanren makes: sound unification, but it can be expensive. The interleaving complete search is another tradeoff: the search uses more time and memory than does depth-first search, in those cases in which DFS would be sufficient. Another tradeoff is that miniKanren avoids the use of non-logical / extra-logical operators. There are no cuts, projected variables, var tests, etc., in the pure parts of the language (which is the version I always use). If you want to perform arithmetic, you either have to encode arithmetic as a pure relation (as Oleg Kiselyov has done), or add a fully-relational constraint system for arithmetic to miniKanren (as we did in cKanren). Or, you can add a delayed goals mechanism, although this can lose completeness.

There are lots of variants of miniKanren at this point: alphaKanren (nominal logic programming); cKanren (constraints); probKanren (prototype probabilistic Kanren); dbKanren (Greg Rosenblatt's extension to handle graph database operations, used in mediKanren 2); etc. I hope we can integrate all of the features into a single implementation some day.

Cheers,

--Will


>> I always read your logicy comments and posts--thank you!

You're welcome and thank you for your work on logic programming :)

Thank you also for the discussion of the trade-offs in minikanren. I appreciate the quest for declarative purity, although I'm a very impure individual myself (the dynamic database is evil and I'm going straight to hell :). In general, my opinion is that Prolog's approach is pragmatic, but it was designed in the 1970's when hardware was weaker. With modern hardware, there may be different options that better address the need for purity without sacrificing efficiency.

Modern Prolog interpreters generally have an option to enable the occurs check (for example, in Swi-Prolog there's an environment flag to enable it for all unifications) and there's also the ISO predicate unify_with_occurs_check/2, that can be used in meta-interpreters, I guess. I think this addresses the soundness concern.

Regarding incompleteness, some modern Prologs use SLG-resolution (a.k.a. tabling) to at least avoid the incompleteness resulting by infinite left-recursions. This still allows infinite right recursions but it's easy to get in such a situation with recursion, or even just iteration, in any Turing complete language. I feel that tabling goes a long way towards addressing incompleteness.
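One way to see what tabling buys: a left-recursive path/2 program loops under plain top-down DFS, while a tabled evaluation terminates. Here is a toy Python sketch of the same effect via a bottom-up fixpoint over a hypothetical cyclic edge set (this imitates the outcome of tabling, not SLG-resolution itself):

```python
# A left-recursive definition like
#   path(X, Y) :- path(X, Z), edge(Z, Y).
#   path(X, Y) :- edge(X, Y).
# loops forever under plain top-down DFS, but a tabled (or bottom-up
# fixpoint) evaluation terminates, even on this cyclic edge set:

edges = {('a', 'b'), ('b', 'c'), ('c', 'a')}

def transitive_closure(edges):
    """Iterate to a fixpoint: stop when no new path facts are derived."""
    paths = set(edges)
    while True:
        new = {(x, y) for (x, z) in paths for (z2, y) in edges if z == z2}
        if new <= paths:
            return paths
        paths |= new

print(sorted(transitive_closure(edges)))
# every ordered pair over {a, b, c} is reachable, including self-loops
```

The fixpoint terminates because there are only finitely many path facts to derive, which is exactly the kind of redundancy-detection that tabling exploits.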

This leaves the issues of the cut and arithmetic. Personally, I'm very comfortable with the cut, although I recognise it's an issue, especially for beginners - and there's some standard uses of the cut that are almost mandatory (e.g. in the first recursive clause in a predicate definition with two recursive clauses). For declarative arithmetic, there are solid libraries for CLP over various integer or real domains (Swi-Prolog has a few, the work of Markus Triska, who often also comments here). I didn't know about Oleg Kiselyov's work, so I'll have to look it up because it sounds intriguing.

I think that with such modern additions (i.e. apart from the cut) Prolog moves closer to the declarative ideal without sacrificing its all-around usability as a general purpose language. What do you think?

Finally, I'd like to know more about minikanrens' search. I'll look around on the internet, but is there a source you would recommend?


> In general, my opinion is that Prolog's approach is pragmatic, but it was designed in the 1970's when hardware was weaker. With modern hardware, there may be different options that better address the need for purity without sacrificing efficiency.

I agree. We are investigating various approaches to improve the efficiency of the search, while retaining completeness, for example.

> Regarding incompleteness, some modern Prologs use SLG-resolution (a.k.a. tabling) to at least avoid the incompleteness resulting by infinite left-recursions. This still allows infinite right recursions but it's easy to get in such a situation with recursion, or even just iteration, in any Turing complete language. I feel that tabling goes a long way towards addressing incompleteness.

I agree that tabling is useful. Tabling is useful even with a complete search, since it can cut off certain cases in which there are infinitely many answers of similar form, and can change the complexity class of queries.

A section of my dissertation discusses adding tabling to miniKanren, based on code Ramana Kumar and I worked on, after discussions and readings with Dan Friedman:

https://github.com/webyrd/dissertation-single-spaced

That tabling implementation is very tricky, though, and doesn't support constraints other than unification. It's time to revisit and update tabling in miniKanren, I think.

> Modern Prolog interpreters generally have an option to enable the occurs check (for example, in Swi-Prolog there's an environment flag to enable it for all unifications) and there's also the ISO predicate unify_with_occurs_check/2, that can be used in meta-interpreters, I guess. I think this addresses the soundness concern.

> This leaves the issues of the cut and arithmetic. Personally, I'm very comfortable with the cut, although I recognise it's an issue, especially for beginners - and there's some standard uses of the cut that are almost mandatory (e.g. in the first recursive clause in a predicate definition with two recursive clauses). For declarative arithmetic, there are solid libraries for CLP over various integer or real domains (Swi-Prolog has a few, the work of Markus Triska, who often also comments here). I didn't know about Oleg Kiselyov's work, so I'll have to look it up because it sounds intriguing.

> I think that with such modern additions (i.e. apart from the cut) Prolog moves closer to the declarative ideal without sacrificing its all-around usability as a general purpose language. What do you think?

I also used to think that Prolog was moving closer to the declarative ideal. And I suspect that most expert Prolog programmers believe similarly.

However, my attitude has changed in the past few years, after seeing Prolog programmers try to implement the relational interpreter from the 2017 ICFP pearl in Prolog.

The problem is one of composing the pure features needed for expressing a program as complicated as the relational interpreter. (Of course, the relational interpreter is designed to be as short and simple as possible, but it is still more complicated than the pure relations you will find in a Prolog textbook, for example.)

In theory, you can easily combine unification with the occurs check, SLG-resolution, disequality/disunification constraints, type constraints, etc., and avoid all impure uses of extra-logical features. In practice, I've seen people struggle to combine these features within a single Prolog implementation. In fact, after multiple attempts from multiple Prolog experts, I have yet to see a version of the relational Scheme interpreter in Prolog that can match the behavior of the miniKanren version.

I'm not claiming that someone couldn't implement a full relational interpreter in Prolog. Obviously they could, since Prolog is Turing-complete. I'm only saying that the default choices of Prolog, plus the intricacies of combining multiple pure features (and leaving out standard and useful non-relational features), seem to make writing a relational interpreter much harder than I would have expected.

If you are up for a challenge, I'd be happy to work on a Prolog version of the Scheme interpreter with you!

Based on what I've seen from Prolog experts trying to reproduce the relational interpreter, and conversations with Prolog experts, my current thinking is that Prolog is actually not a good language for relational programming. Too many of the defaults must be adjusted or overridden, too many standard techniques and idioms must be abandoned, and too many potentially incompatible libraries must be composed in order to get everything "just right."

The problem isn't that relational programming in Prolog isn't possible. The problem is that it requires enough work and non-standard techniques that in practice people don't do it.

At least, that is what I observe as an outsider.

If there is an easy way to combine features within a single FOSS Prolog implementation to allow for the types of relational programming we do in miniKanren, I'd be delighted to see it in action. I'd also be delighted to write a paper about it with you! :)

> Finally, I'd like to know more about minikanrens' search. I'll look around on the internet, but is there a source you would recommend?

I would recommend reading Jason Hemann and Dan Friedman's lovely little paper on microKanren, which gives a tutorial reconstruction of miniKanren's complete interleaving search, starting from depth-first search:

http://webyrd.net/scheme-2013/papers/HemannMuKanren2013.pdf

https://github.com/jasonhemann/microKanren

The standard exercise for learning microKanren is to translate the resulting ~50 lines of Scheme code into the language of your choice.
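For instance, a Python translation of the core might look like the following (a sketch following the paper's structure; the concrete stream and state encodings are choices of this example, and compound terms are omitted for brevity):

```python
# Streams are () when empty, a (state, stream) pair when mature, or a
# zero-argument thunk when immature; a state is (substitution, counter).

class Var:
    def __init__(self, n): self.n = n
    def __eq__(self, o): return isinstance(o, Var) and self.n == o.n
    def __hash__(self): return hash(('var', self.n))

def walk(t, s):
    while isinstance(t, Var) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    a, b = walk(a, s), walk(b, s)
    if a == b: return s
    if isinstance(a, Var): return {**s, a: b}
    if isinstance(b, Var): return {**s, b: a}
    return None                     # atoms only; no compound terms here

def eq(a, b):                       # the unification goal (==)
    def goal(state):
        s, c = state
        s2 = unify(a, b, s)
        return ((s2, c), ()) if s2 is not None else ()
    return goal

def call_fresh(f):                  # introduce a fresh logic variable
    return lambda state: f(Var(state[1]))((state[0], state[1] + 1))

def disj(g1, g2):
    return lambda state: mplus(g1(state), g2(state))

def conj(g1, g2):
    return lambda state: bind(g1(state), g2)

def mplus(s1, s2):
    if callable(s1):                 # immature stream: SWAP the streams;
        return lambda: mplus(s2, s1())   # this swap is the interleaving
    if s1 == ():                     # (plain DFS would keep s1 first and
        return s2                    # starve s2 on infinite streams)
    return (s1[0], mplus(s1[1], s2))

def bind(stream, g):
    if callable(stream):
        return lambda: bind(stream(), g)
    if stream == ():
        return ()
    return mplus(g(stream[0]), bind(stream[1], g))

def take(n, stream):
    out = []
    while n > 0:
        while callable(stream):
            stream = stream()        # force immature streams
        if stream == ():
            break
        out.append(stream[0])
        stream, n = stream[1], n - 1
    return out

def fives(x):   # succeeds with x = 5, infinitely often (note the delay)
    return disj(eq(x, 5), lambda st: lambda: fives(x)(st))

def sixes(x):
    return disj(eq(x, 6), lambda st: lambda: sixes(x)(st))

goal = call_fresh(lambda x: disj(fives(x), sixes(x)))
answers = [walk(Var(0), s) for (s, _) in take(4, goal(({}, 0)))]
print(answers)   # [5, 6, 5, 6]: both infinite branches contribute
```

Under depth-first search, the same query would yield 5, 5, 5, 5 and never reach the sixes branch; the swap in `mplus` is what makes the search complete.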

You might also like this more technical paper on the `LogicT` monad:

`Backtracking, Interleaving, and Terminating Monad Transformers (Functional Pearl)`, Oleg Kiselyov, Chung-chieh Shan, Daniel P. Friedman, and Amr Sabry. Proceedings of the Tenth ACM SIGPLAN International Conference on Functional Programming (ICFP 2005), Tallinn, Estonia:

https://dl.acm.org/doi/10.1145/1086365.1086390

http://okmij.org/ftp/papers/LogicT.pdf

Cheers,

--Will


The Gödel programming language from Patricia Hill and John W. Lloyd is, to me, a good example of what a Prology language would look like, if designed from the ground-up for relational programming.

I also suspect lambdaProlog would make for a nice foundation for relational programming, if the search were improved.


>> If you are up for a challenge, I'd be happy to work on a Prolog version of the Scheme interpreter with you!

Oh gosh, don't do that to me :) I just finished submitting my second paper and I have to write my thesis, plus another couple of side projects already and a hell of a lot of work on my Louise ILP system. Maybe we can talk about this again after March 2022? When I should be finished with my thesis?

Note however I may not be the best person for this job. I'm serious when I say I'm "impure". I mean that I'm not very interested in declarative purity. For me it's more like an ideal goal than something I absolutely feel I need. I suppose I've become so comfortable with Prolog's, er, pragmatic hacks that I don't feel I need them gone.

That said, it's certainly an interesting challenge :)

>> The problem isn't that relational programming in Prolog isn't possible. The problem is that it requires enough work and non-standard techniques that in practice people don't do it.

I hope I don't misunderstand what you mean by relational programming. I think you mean pure logic programming, _not_ like implemented in Prolog, but as it's described, say, in Lloyd? Then yes, I agree. It is often infinitely easier to use Prolog's impure facilities, like the dynamic db and so on, to complete a complex piece of code. I guess the quintessential example is findall/3 and friends- impossible to implement in pure Prolog, without a dynamic db (er, far as I know).

In general, I find myself resorting to using impure features when I can't find an efficient way to write a program without them. For instance, sometimes I want to use random access data structures, over linked lists (so, Prolog "terms" with setarg/3 and arg/3, rather than list traversal). Other times it's just so much easier to bang together a findall/3 with an arg/3 counter, than write yet another recursive skeleton with 4 accumulators- and an interface predicate to hide them.

To be honest, a purely declarative language that took those impure but convenient hacks away would take me some time to get used to. I don't know that I want to give those conveniences up. They're there for a reason- a tradeoff of purity for efficiency and usability. I, personally, find that tradeoff balanced and while I'm not against going the other way (sacrificing efficiency and ease of use for purity), as a research goal, I'm not going to go out of my way to try and achieve that.

>> I'd also be delighted to write a paper about it with you! :)

I am very flattered :) But I'm letting you down, ain't I? I'll bring up Markus Triska again, as someone I understand to be a dedicated Prolog programmer, also dedicated to declarative purity. Maybe he'd be up to it.

Thank you for the recommended reading on minikanren's search!


> Oh gosh, don't do that to me :) I just finished submitting my second paper and I have to write my thesis, plus another couple of side projects already and a hell of a lot of work on my Louise ILP system. Maybe we can talk about this again after March 2022? When I should be finished with my thesis?

Very nice! ILP is extremely interesting, especially for program synthesis.

I'd be happy to talk again after your thesis!

> I hope I don't misunderstand what you mean by relational programming. I think you mean pure logic programming, _not_ like implemented in Prolog, but as it's described, say, in Lloyd?

Correct. In particular, I mean logic programming in a system with the following properties:

1) sound unification

2) complete search

3) all predicates work in all-modes, all the time

4) [optional, but extremely useful] additional relational constraints--such as disequality constraints--that work in all modes, support limited forms of negation, and can express some otherwise infinite sets of constraints finitely

> In general, I find myself resorting to using impure features when I can't find an efficient way to write a program without them. For instance, sometimes I want to use random access data structures, over linked lists (so, Prolog "terms" with setarg/3 and arg/3, rather than list traversal). Other times it's just so much easier to bang together a findall/3 with an arg/3 counter, than write yet another recursive skeleton with 4 accumulators- and an interface predicate to hide them.

I, too, have played in the muddy world of impure-but-convenient / not-quite-logic programming. This world is similar in spirit to almost-functional-programming, with the occasional mutation and unconstrained effect thrown in. Mutation is powerful and useful. Unconstrained effects are powerful and useful.

On the other hand, even a single use of these features can destroy a host of program invariants that could be used for reasoning, analysis, or optimization. Or for running programs "backwards." Or for semantics-guided synthesis...

The lesson I hope Prolog programmers will take from miniKanren is that they are giving up something very powerful when using these extra-logical operators, in that these operators destroy the ability to soundly run predicates in all modes, and that sophisticated all-modes relations can perform computations that would be difficult to express in other ways.

> To be honest, a purely declarative language that took those impure but convenient hacks away would take me some time to get used to. I don't know that I want to give those conveniences up.

These features are very difficult to give up, similar to the way mutation and uncontrolled effects are difficult to give up when writing programs in a lazy functional language. There is no doubt that it requires a total change in programming style, giving up convenient and powerful idioms, and often having to rethink approaches to efficiency.

I remember the first time I heard about functional programming. After programming for 20 years, I had to re-learn the notion of 'variable'. (Just like I had to re-learn the notion of 'variable' when I learned logic programming.)

This is the main reason I think Prolog isn't suitable for relational (pure logic) programming. The language design, compilers, libraries, the WAM, books and tutorials, research papers, publicly available code, etc., assume non-relationality by default. If you want to program purely in Prolog, you have to know which parts of the language to avoid (almost everything!) and which libraries to avoid (most of them, as far as I can tell), in addition to knowing exactly which combination of optional features to use to enable relational programming. You won't find this described in any Prolog book or tutorial that I have seen, beyond the famous `append` example, so you are on your own to develop a style of programming that is at odds with the ecosystem in which you work. It's the same feeling I get whenever I try to write purely functional code in a language in which mutation is the default, and in which almost all libraries use mutation.

Relational programming requires a different point of view, which is usually at odds with standard Prolog practice.

> They're there for a reason- a tradeoff of purity for efficiency and usability. I, personally, find that tradeoff balanced and while I'm not against going the other way (sacrificing efficiency and ease of use for purity), as a research goal, I'm not going to go out of my way to try and achieve that.

I agree--it's all tradeoffs! And I agree that it is such a different way of programming, with a totally different set of idioms, advantages and disadvantages, performance issues, etc., that you would have to go far out of your way to program in the purely relational style. And, of course, programming relationally may provide little or no benefit for what you want to accomplish.


>> 3) all predicates work in all-modes, all the time

I think this is the feature that is the most sorely missed in Prolog programming and as you say it is possible to have it, but only if one is prepared to jump through no end of hoops. Sometimes it's just not possible with any reasonable amount of effort and any sensible amount of coding. This feature is also the one most characteristic of relations, rather than functions, or the demi-functions that Prolog predicate definitions end up as (i.e. with more or less standard inputs and outputs, a.k.a. modes).

I think I detect a hint of frustration in your comment, perhaps after one too many encounters with recalcitrant Prolog programmers, unwilling to give up their convenient hacks :)

Well, these days I can wear another hat, that of the ILP researcher, and there is an interesting use of modes in ILP: they are often used to reduce the size of the hypothesis search space. Because, of course, the hypothesis space for the program append(+,+,-) is much smaller than the hypothesis space for the program append(?,?,?)! Aleph, probably the best-known ILP system of all time (and its ancestor, Progol) has a special syntax of "mode declarations" to determine input and output modes, for this reason.

Interestingly, the more recent approach (attention: I'm about to talk about my PhD work :) of Meta-Interpretive Learning (MIL), does away with specific modes and Aleph and Progol's (and many other systems') mode declarations. Instead, MIL encodes inductive bias by means of second-order datalog clauses, the "metarules" (e.g. ∃P,Q,R,X ∀x,y,z: P(x,y) ← Q(x,z), R(z,y,X) is a metarule with second-order variables P,Q,R existentially quantified over the set of predicates, first-order variable X existentially quantified over the set of constants and first-order variables x,y,z universally quantified over constants). Metarules do not specify input and output modes - they are, after all, datalog. This would normally blow up the hypothesis space to unsearchable proportions, but this is managed in various ways - and an actual search of the hypothesis space (i.e. the set of sets of clauses) can be avoided altogether, for a polynomial time learning approach.
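As a toy illustration of how metarules bound the hypothesis space (a deliberately naive sketch, nothing like Metagol's meta-interpretive search; the background relations and example are made up), one can enumerate first-order instances of the simpler chain metarule P(x,y) ← Q(x,z), R(z,y):

```python
from itertools import product

# Hypothetical background knowledge: two relations given extensionally.
background = {
    'parent': {('ann', 'bob'), ('bob', 'carol')},
    'likes':  {('ann', 'ann')},
}

def chain_instance(q, r):
    """The relation defined by p(X,Y) <- q(X,Z), r(Z,Y)."""
    return {(x, y)
            for (x, z) in background[q]
            for (z2, y) in background[r] if z == z2}

# Positive example: grandparent(ann, carol). The hypothesis space is just
# the |preds|^2 instantiations of (Q, R) permitted by the chain metarule.
hypotheses = [(q, r) for q, r in product(background, background)
              if ('ann', 'carol') in chain_instance(q, r)]
print(hypotheses)   # [('parent', 'parent')]
```

Even this crude enumeration shows the point: the metarule restricts candidates to one clause shape, so the search is over predicate substitutions rather than over arbitrary clauses.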

But I digress. What I mean to say is that this theoretically mode-free learning has so far only been implemented in Prolog [1][2][3] or ASP [4]. ASP is purely declarative, but incomplete in its own way (because the Herbrand base must be fully ground, which is impossible to do for infinite Herbrand bases) and Prolog has the mode problem we discussed. So while we have a powerful learning approach, with a nice theoretical basis, in practice we implement it in a less than satisfying manner, always. And, I dare say I've seen the result in annoying "edge cases". So, after our discussion, I wonder if a minikanren implementation of MIL would overcome the limitations of the Prolog and ASP implementations - even at the cost of expensive occurs-checks etc. I mean, at least if one gives up efficiency, there should be an equally important reward, and an in-practice realisation of the in-principle soundness and completeness of the approach would probably be that. In my opinion anyway. At the very least it's a more compelling reason to re-implement a system in a new language than "well, it's an implementation in a new language".

Food for thought, post-thesis :)

____________

[1] Metagol:

https://github.com/metagol/metagol

(Try to ignore the scary header)

[2] Thelma:

https://github.com/stassa/thelma

[3] Louise:

https://github.com/stassa/louise

[4] Hexmil:

https://arxiv.org/abs/1805.00068


Did anything ever come out of Barliman or other attempts to make an interactive interface around this stuff?

It's such a cool concept, I'm surprised I never saw a mainstream IDE where I can write rough constraints or examples and have it generate code, just like in those really nice examples of minikanren from a few years back.


This paper and prototype educational system were based on Barliman:

`Towards Answering "Am I On the Right Track?" Automatically using Program Synthesis` Molly Q Feldman, Yiting Wang, William E. Byrd, François Guimbretière, & Erik Andersen. SPLASH-E 2019 https://mollyfeldman.github.io/splashe2019.html

This paper was at least partly inspired by the relational interpreter approach that is the foundation of Barliman:

`From definitional interpreter to symbolic executor` Adrian D. Mensing, Hendrik van Antwerpen, Casper Bach Poulsen, and Eelco Visser. Proceedings of the 4th ACM SIGPLAN International Workshop on Meta-Programming Techniques and Reflection (META 2019) https://dl.acm.org/doi/10.1145/3358502.3361269

As for Barliman, we took a step back from improving the interface, and started doing experiments with a bunch of colleagues on various ways to improve synthesis speed and expressiveness.

Until COVID-19 got in the way, Kanae Tsushima and I were beginning to put some of the pieces back together in an improved Barliman, named `新-Barliman` (`Shin-Barliman`, `shin` meaning "new" in Japanese):

https://github.com/k-tsushima/Shin-Barliman

I'm hoping we will make some real progress again soon!


I'm really glad to hear that! Thanks for contributing this really cool work.


Hi Will! Your talks really left a mark on me and I’ve been playing around with relational programming stuffs ever since. Thank you!

While you’re here... Has there been any progress on support for rational numbers (or other number systems) in pure minikanren? I’ve always wanted to have a crack at it, but have no idea if it has already been done.


Hi,

thanks for spreading ideas and doing videos. What's new in the relational field?


Hi Will,

Do you have any plans for writing a book on MiniKanren for Enterprise programmers?

- Kamaal.


Hi Kamaal!

I know Cisco is using core.logic, which is David Nolen's Clojure variant of miniKanren, in their ThreatGrid product. I think the Enterprisey uses of core.logic are a bit different than the purely relational programming that I find most interesting, though.

Having said that, we are now on our second generation of mediKanren, which is software that performs reasoning over large biomedical knowledge graphs:

https://github.com/webyrd/mediKanren/tree/master/medikanren2

mediKanren is being developed by the Hugh Kaul Precision Medicine Institute at the University of Alabama at Birmingham (HKPMI). HKPMI is run by Matt Might, who you may know from his work on abstract interpretation and parsing with derivatives, or from his more recent work on precision medicine. mediKanren is part of the NIH NCATS Biomedical Data Translator Project, and is funded by NCATS:

https://ncats.nih.gov/translator

Greg Rosenblatt, who sped up Barliman's relational interpreter by many orders of magnitude, has been hacking on dbKanren, which augments miniKanren with automatic goal reordering, stratified queries/aggregation, a graph database engine, and many other goodies. dbKanren is the heart of mediKanren 2.

I can imagine co-writing a book on mediKanren 2, and its uses for precision medicine...

Cheers,

--Will


As an MD, I would be extremely interested in such a book! I'm a big fan of yours :-)

I'm still working on my adaptation of MiniKanren for unification over multidimensional array terms, BTW. I hope to get it really working, someday.

Cheers!



