A gentle introduction to Prolog (2013) (bernardopires.com)
221 points by mihau on Oct 8, 2016 | 128 comments

I studied Prolog in college (3rd CS year course, 2003).

I concluded it didn't deliver on its promise.

The promise was that, as in the article, you 'Say what you want, not how you want it done.'

In practice, once you start writing non-trivial programs, you run into situations where they take ages to run, as Prolog does its search in the background.

So, you have to start using all these languages features to control and optimize the search, such as: https://en.wikipedia.org/wiki/Cut_(logic_programming)
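To make that concrete, here's a tiny sketch (the predicate name is mine) of how the cut creeps into even trivial code:

```prolog
% max_of/3 is correct without the cut, but on backtracking Prolog
% would pointlessly re-try the second clause, so the programmer
% adds ! to commit to the first answer.
max_of(X, Y, X) :- X >= Y, !.
max_of(X, Y, Y) :- X < Y.
```

That `!` is pure "how": it says nothing about what a maximum is, only about which branches of the search to abandon.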

And you end up spending a lot of time reasoning about exactly the 'how' that you supposedly didn't have to care about.

I concluded that, as a result of performance issues, the core Prolog abstraction 'leaks' too frequently for it to be worthwhile. (https://en.wikipedia.org/wiki/Leaky_abstraction)

It's a great promise, if a smarter language or implementation could achieve it, but overall I concluded that if I was going to have to spend all this time reasoning about the how anyway, I might as well just deal with that up-front, rather than after the fact.

I'd love to hear a contrasting opinion (I got pretty good at it, but only over months, not years, of use). But I suspect this is the reason why Prolog didn't make it. It looks good at the start, but it doesn't deliver.

IMHO, Prolog is the leading example of a language you learn for the brain-twisting, and not for actual usage. Even more so than Lisp.

Full-blown end-to-end Prolog programs in production are vanishingly rare, and for good reason. But much more useful are the Prolog-inspired chunks of code dealing with a separable part of a larger problem. Logic-based programming is an appropriate tool for some problems, and it can be applied even in the context of procedural (of course, it's easier in languages that support DSLs).

As an example, Clojure's core.logic seems to be pretty popular and that's basically just a subset of Prolog. Somewhat counter-intuitively, Prolog-inspired DSLs tend to be more straightforward because they're missing some of the advanced features that Prolog needs to make it usable. The DSL can rely on the host language to do some of the dirty work for it.

Actually, less than you might think. The Android OTA recovery environment is written in prolog, last I remember.

Same thing happened to me, also right when I encountered "cut". However, rather than considering Prolog a leaky abstraction, I started thinking of it as a concrete implementation of a particular kind of useful search algorithm. If you learn how the core algorithm works, you can use it in the same way you'd use a regular expression library to solve particular types of problems.

This is the correct approach. It excels in certain domains. If you need something that is suspiciously close to some kind of rule engine then prolog is the way to go.

>> If you learn how the core algorithm works, you can use it in the same way you'd use a regular expression library to solve particular types of problems.

Aye, very good advice- also, keep in mind Prolog is essentially Turing-complete pattern-matching, so like regular expressions but with the ability to describe any program you can hope to compute (and, er, some you can't- see infinite recursion).

>> The promise was that, as in the article, you 'Say what you want, not how you want it done.'

That is the promise of declarative programming, in general. Prolog is a logic programming language that is also declarative. Its primary "promise", insofar as it makes any promises, is to allow the use of first-order logic as a programming language. In that, it delivers in spades, more so than any other language (and even other logic programming languages are largely based on Horn clauses).

The reason Prolog is held up as an example of declarative programming is because it's one of the best realisations of that paradigm- or in any case it's one of the few such languages one can do actual programming in (as opposed to, say, xml or html).

Prolog's name however stands for "PROgrammation en LOgique", not "PROgrammation DEclarative".

In that sense it's pretty unfair to blame Prolog for not being a perfect declarative language, or for not keeping any other promises it never made.

In my opinion, Prolog is a horrible programming language. BUT... The idea of logical programming has much promise. Not for programming, but for running queries on a knowledge base. In fact, unification / lambda calculus is arguably the only way you can make complex logical deductions without a brute force approach (you could also use machine learning, but that would not scale for queries with many variables).

The only problem is, the domain in which this is useful is fairly small, and requires an explicitly defined knowledge base, whereas things like machine learning are very flexible and cover a broad variety of domains pretty well. One area I think it holds promise for is in games - imagine characters being able to make deductions about the world around them, and react appropriately - beyond things like behavior trees (which also do not scale once you have many complex rules interacting).

So...in my opinion it is a little bit like SQL. You would not use SQL to write an application (I hope!).

The idea of logical programming has much promise. Not for programming, but for running queries on a knowledge base.

You might want to take a look at Datalog [0]:

Datalog is a declarative logic programming language that syntactically is a subset of Prolog. It is often used as a query language for deductive databases. In recent years, Datalog has found new application in data integration, information extraction, networking, program analysis, security, and cloud computing.

An example of this in real-world use would be in Rich Hickey's Datomic [1]:

Datomic is a distributed database and implementation of Datalog on Clojure. It has ACID transactions, joins, a logical query language—Datalog.

[0] https://en.wikipedia.org/wiki/Datalog [1] https://en.wikipedia.org/wiki/Datomic
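To make the "syntactic subset" point concrete, the canonical Datalog example, transitive closure, loads unchanged in any Prolog (a minimal sketch):

```prolog
% A graph as facts; reachability as two rules. This is both
% legal Datalog and legal Prolog.
edge(a, b).
edge(b, c).
edge(c, d).

path(X, Y) :- edge(X, Y).
path(X, Y) :- edge(X, Z), path(Z, Y).
% ?- path(a, d).  succeeds; ?- path(d, a).  fails
```

Transitive closure is also the classic thing SQL famously struggled with until recursive CTEs arrived, which is part of why deductive databases keep coming back.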

I've tried to get started with Datalog a few times, but haven't yet been able to find a useful free/open-source system that seems practical and has a good hands-on tutorial.

I'm thinking there ought to be something with usefulness comparable to SQLite, or at the very least Berkeley DB. But I've yet to find it.

>> So...in my opinion it is a little bit like SQL. You would not use SQL to write an application (I hope!).

Not a bad hunch, but SQL is in a way only half of Prolog.

SQL relations (tables) can be seen as first-order logic predicates that are always true.

Prolog has those (they're called unit clauses, or "facts") but also has relations that are actually implications (called definite clauses, or "rules"). "Rules", being conditionally true (or not) allow programming logic to be described. [1] [2]

The funny thing is that in Prolog both facts and rules are the same kind of thing (a special kind of first-order predicate called a Horn clause). In, er, fact, so are the queries you write at the command line (negative clauses which are always false, so that you can prove them true by refutation, binding variables in the process; this is how you "retrieve data" in Prolog, where your program is your data).

In this sense Prolog is very different to SQL, which has a clunky syntax on top of a neat abstraction- in Prolog you program with the abstraction itself so, yes, totally: in Prolog you use the database to write an application. Except it's not a SQL database.
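A minimal sketch of the facts/rules/query trinity, with made-up names:

```prolog
% Facts play the role of table rows:
employee(alice, sales).
employee(bob, sales).
employee(carol, engineering).

% A rule is conditionally true, roughly a view with a self-join:
colleagues(X, Y) :- employee(X, D), employee(Y, D), X \= Y.

% The query ?- colleagues(alice, Who). binds Who = bob,
% much like SELECT e2.name FROM employee e1 JOIN employee e2
% ON e1.dept = e2.dept WHERE e1.name = 'alice'.
```

Facts, rule and query are all Horn clauses; the only difference is where the implication's head and body sit.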

[1] https://en.wikipedia.org/wiki/Horn_clause

[2] http://mathworld.wolfram.com/HornClause.html

I agree - I did not clarify very well, but what I meant is that it is like SQL in that it is probably best utilized with an accompanying language. :)

I imagine you never had the fortune of using Oracle Forms or its replacement Apex.

I have not :)

It looks good at the start, but it doesn't deliver.

I think that it depends on the domain. Part of my PhD project was making a natural language generator for a Dutch grammar/lexicon. The grammar was developed for a high-coverage parser. Due to Prolog's declarative nature, the grammar, lexicon, and productive lexicon could be used with very few changes. (E.g. in productive lexicon, for parsing one would derive a root and lexicosyntactic information. In generation, you start with a root and underspecified lexicosyntactic information and derive a word with its full lexicosyntactic information.)

One could argue that Prolog was primarily used for data (grammar rules, lexicon). And it is true that people have developed systems that target unification grammar specifically. But in this domain, Prolog gives you a good local optimum: feature structures are trivially converted to Prolog terms, feature structure unification becomes Prolog term unification, Prolog's metaprogramming facilities make it easy to compile/rewrite grammar rules (e.g. in order to exploit first-argument indexing in particular scenarios).
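A toy illustration of the "feature unification becomes term unification" point (not the actual grammar, obviously):

```prolog
% Number agreement handled entirely by term unification; the same
% grammar both parses and, with Num bound, generates.
sentence --> np(Num), vp(Num).
np(sing) --> [the, cat].
np(plur) --> [the, cats].
vp(sing) --> [sleeps].
vp(plur) --> [sleep].
% ?- phrase(sentence, [the, cat, sleeps]).   succeeds
% ?- phrase(sentence, [the, cats, sleeps]).  fails: Num clash
```

The agreement check costs no extra code at all: the shared `Num` variable is the feature structure.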

That said, I didn't and wouldn't use Prolog outside that domain and haven't written Prolog since. Also, I think Prolog has serious deficiencies, e.g. it can be quite tricky to write good steadfast[1] predicates.

[1] http://permalink.gmane.org/gmane.comp.lang.swi-prolog.genera...

>> One could argue that Prolog was primarily used for data (grammar rules, lexicon).

Weeell, more to the point: in Prolog data is your program so anything you do is data manipulation (or program execution, depending on how you choose to think about it).

The question is, though: had we invested in Prolog all the time and money that went into imperative and object-oriented programming, would it still fail to deliver?

Do we include in our thinking the full potential of the language? I think this is what happened with functional programming too. Only lately have we come to understand its true potential, starting with some functional concepts in Python and JavaScript in the early and mid 90s and continuing with ES6, Rust and Swift.

There has been quite a lot invested in Prolog and its successors. Eg. https://en.wikipedia.org/wiki/Fifth_generation_computer

It's just fundamentally much harder to get anywhere than with procedural languages.

>> It's just fundamentally much harder to get anywhere than with procedural languages.

It's harder to learn than procedural languages. Once you know what you're doing it's much easier to get anywhere with it.

I meant that designing and implementing good declarative logical languages is an open research problem, because logical inference is basically unsolved. Procedural languages are trivial, in comparison.

LINQ and the C# compiler (with pattern matching proposals in 5.0, I think, but not yet implemented) is inching closer to some of Prolog's ideas. In that sense there is a bit of a risk of implementing a "buggy, slow" Prolog in the process of re-inventing its wheels.

Well, if you didn't say how you want it done, you can't really complain about how slow it is either. I was under the impression it's slow all along, but it might have other merits, especially for a student rather than an enterprising dev.

Prolog was slow in the '70s.

Not anymore [1]. It is, after all, an in-memory NoSQL database. Even "slow"-ish Prologs are pretty fast these days [2].

If your program is running slowly you need to take care with the way you've written it, but that's true for any language. It's more the case with Prolog, and it's harder to optimise because it's harder to get your head around, but beyond that it's not "slow" in any sense of the word, particularly when you take into account what it is that it does.

[1] https://www.dcc.fc.up.pt/~vsc/Yap/

[2] http://www.swi-prolog.org/pldoc/man?section=jitindex

At last, I have something to add to a HN thread.

I used prolog professionally quite a bit back in the early 2000s. IBM had a product, Tivoli Enterprise Console, that was used to filter, aggregate, and correlate events from their monitoring toolset at the time. All of the rules for event routing and filtering were written in an extended version of prolog.

The system was quite powerful at the time, and I found rule writing to be really intuitive once I wrapped my head around the concepts.

IBM eventually sunsetted TEC in favor of a similar product they acquired when they bought Netcool, but I'll bet there are still a few organizations out there still using it.

I had a Prolog course at university; it was interesting, but... very weird. (There's a joke that writing a compiler in Prolog is trivially easy; but writing a compiler in Prolog that does anything other than say 'No.' when given an invalid program is infeasibly hard.)

I generally found that while getting Prolog to do cool things was fairly straightforward, getting Prolog to do cool things quickly involved knowing precisely where to put cut operators, and that was basically black magic. But it's been a while.

BTW, our Prolog lecturer wrote a theorem prover in it. That's pretty straightforward; we all wrote theorem provers in it as an exercise; but his theorem prover had a polished MacOS GUI, and that was also written in Prolog. He was writing low-level MacOS stuff in raw Prolog. To this day I still have no idea how.

>> getting Prolog to do cool things quickly involved knowing precisely where to put cut operators, and that was basically black magic. But it's been a while.

To add to tom_mellior's much-upvotable reply, I find that the cut is much less painful if you keep your predicates small and mind separation of concerns, as you should anyway [1].

A good recommendation about when to use the cut operator is "as soon as undesired nondeterminism happens". With short and to-the-point predicates this is much easier to do.

Finally- the Prolog community distinguishes two types of cut, the "green cut" and the "red cut". A green cut only prunes backtracking into clauses that would be checked and fail anyway, so it changes nothing but performance. A red cut can significantly change the behaviour of the program, by stopping subsequent clauses from being tried even when they could succeed.
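A quick sketch of the two (the threshold and names are mine):

```prolog
% Green cut: the guards H < 5 and H >= 5 are mutually exclusive,
% so the cut only avoids re-testing a clause that must fail.
split([], [], []).
split([H|T], [H|Small], Big) :- H < 5, !, split(T, Small, Big).
split([H|T], Small, [H|Big]) :- H >= 5, split(T, Small, Big).

% Red cut: drop the H >= 5 guard and rely on the cut alone. Now
% the third clause's correctness depends on the cut; remove the
% cut and split2([3|_], _, [3|_]) wrongly becomes derivable.
split2([], [], []).
split2([H|T], [H|Small], Big) :- H < 5, !, split2(T, Small, Big).
split2([H|T], Small, [H|Big]) :- split2(T, Small, Big).
```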

[1] Frex, see this very nice resource: "Coding Guidelines for Prolog", Covington, Bagnara, O'Keefe, Wielemaker and Price. Online here: https://arxiv.org/pdf/0911.2899.pdf

> There's a joke that writing a compiler in Prolog is trivially easy; but writing a compiler in Prolog that does anything other than say 'No.' when given an invalid program is infeasibly hard.

I realize you said this was a joke, but still... A compiler in Prolog might look like this:

  compile(Input, Output) :-
      parse(Input, AST),
      process(AST, Output).
and this does, indeed, just answer "no" or "false" when it fails. But you can improve it by just doing something like this:

  compile(Input, Output) :-
      (   parse(Input, AST)          % if parsing succeeds...
      ->  process(AST, Output)       % ... then process the AST...
      ;   writeln('syntax error')    % ... otherwise report an error
      ).
Or you can just use exceptions, which also exist in Prolog. Of course you'd need to collect data (line, column, expected next construct) during parsing in order to make the error message nicer.
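A sketch of the exception-based variant, assuming the same hypothetical parse/2 and process/2 as above:

```prolog
% compile_checked/2 throws a term describing the failure;
% compile/2 catches it, reports, and fails cleanly.
compile(Input, Output) :-
    catch(compile_checked(Input, Output),
          syntax_error(Where),
          ( format("syntax error near ~w~n", [Where]), fail )).

compile_checked(Input, Output) :-
    (   parse(Input, AST)
    ->  process(AST, Output)
    ;   throw(syntax_error(Input))
    ).
```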

> I generally found that while getting Prolog to do cool things was fairly straightforward, getting Prolog to do cool things quickly involved knowing precisely where to put cut operators, and that was basically black magic.

The cut is indeed complex, and for historical reasons it's taught too early and emphasized too much by almost all Prolog classes and books. For example, most resources would tell you to write the example above as:

  compile(Input, Output) :-
      parse(Input, AST),
      !,    % commit to this choice if parsing succeeded
      process(AST, Output).
  compile(_Input, _Output) :-
      % we get here by funky implicit control flow if parsing failed
      writeln('syntax error').
It's not that hard to understand the cut once you have a good understanding of normal Prolog execution, but yes, that has to come first. And once you understand how it works, you can often find better solutions that do not use the cut.

As for writing "low-level" code like GUIs, you just use your Prolog system's foreign function interface....

As I say, it's been years. I don't recall -> and ; at all. (Was there ever a time when they weren't in the language?) They look much nicer than cut and the implicit control flow.

I am, actually, right now struggling with priority-and-constraint-based register allocation and instruction selection for a compiler backend. It is so the right kind of problem for Prolog.

There's nothing funny about ';': it's just 'or'. Combined with '->', which means 'then', it essentially comes to behave like an 'else'.

';' has been in the language since the beginning, and I'm pretty sure '->' was too, but can't say for sure.

I think -> is newer, but it's in ISO Prolog, which came out in 1995. So it's rather likely that the OP learned Prolog at a time when -> existed but teaching resources had not caught up. Not sure if that's the case even now.

I learned Prolog back in 1997, so it may have existed in the implementations I used back then. Must see if I can find an implementation of HU Prolog from back then. I also used SWI Prolog from back then, and I'd expect that was more up to date.

Edit: mention SWI Prolog.

The SWI-Prolog website is written in Prolog.

Not the only IBM product to use Prolog, either. A large chunk of the relationship extraction code for the Jeopardy! Watson DeepQA system was written in Prolog as well[1]. Obviously something of a natural fit for the language.


[1] http://ieeexplore.ieee.org/document/6177727/, if you've got an ieee subscription

I've seen multiple examples of prolog over the years. From looking at them, one gets the idea that you must hardcode all your data into your program and the only UI is a somewhat odd shell, with no way to actually send input in or get output out. It gives the impression of a neat idea that is a cool academic toy but not a practical language for real-world use. I think that it would be helpful if more prolog intros had a simple example that read a CSV file or JSON object or something, processed it using the defined rules, and output the results to text or HTML file. That's a minor thing, but I think that it would give a better impression and get more people interested and trying it.

Of course intros to other languages may gloss over I/O as well, but with mainstream imperative languages (that aren't oriented to defining and querying data), it's more well known or assumed that the standard I/O patterns apply.

>> I think that it would be helpful if more prolog intros had a simple example that read a CSV file or JSON object or something, processed it using the defined rules, and output the results to text or HTML file.

You're right about that and I don't think it's a minor thing. One reason is that most tutorials have a hard enough time going through everything you need to know before you can start parsing json and writing out to file, without getting stuck in infinite recursion etc. so the nitty-gritty of day-to-day programming work is often left as an exercise for the reader.

On the other hand, I have the feeling that many people check out Prolog to see how it's different, then go looking for the way to do the usual things in it. This may be the wrong approach. Prolog has some advanced features and those are "the whole point" of the language. If all you'll ever need to do is process some CSV, there's probably no point in going through all the hard work of learning Prolog- you can do it in another language rather more easily.

That said, the Swi-Prolog homepage has a link to a good tutorial on creating web applications with it that I recommend to anyone interested in that sort of thing:


It's under the "Tutorials" header. Unfortunately my phone's ISP is blocking it so I can't link directly to it. Harumph.
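That said, the day-to-day plumbing is shorter than people expect. A hedged sketch using SWI-Prolog's bundled library(csv); the file name and all predicate names are made up:

```prolog
% Read people.csv (rows of Name,Age), load the rows as facts,
% then print everyone a rule selects.
:- use_module(library(csv)).
:- dynamic person/2.

adult(Name) :-
    person(Name, Age),
    Age >= 18.

report(File) :-
    csv_read_file(File, Rows, [functor(person)]),
    maplist(assertz, Rows),
    forall(adult(Name), format("~w is an adult~n", [Name])).
```

Input in, rules applied, text out; nothing about the language forbids ordinary I/O, it's just rarely what tutorials lead with.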

I used to feel similarly about Lisp - for some reason many authors seem(ed) enamoured with the limits designed into Lisp systems in the 70s. A little like teaching someone today to program BASIC on a Commodore VIC-20 with a tape drive. Sure, it's BASIC and it is programming - but it's a little strange to pretend it's modern BASIC programming.

As for Prolog, I get the sense that it's implied to be an "information/data language", and it is taught a little like SQL is taught - only you're given just the SQL parser, and building the clustered, relational backend is left as an exercise for the reader, as well as the bulk import/export and backup/restore systems.

Here's the link ("Prolog Web Applications"):


That's just a short collection of resources, not a tutorial. The actual tutorial linked from http://www.swi-prolog.org/ -> Tutorials -> Web applications is http://www.pathwayslms.com/swipltuts/html/index.html

This appears to be limited to explaining the server side, which is only a tiny fraction of actual web applications. In practice, the client side is often equally important, or even more so than the server side.

And an example of a Prolog web application would be Swish, the online prolog IDE: http://swish.swi-prolog.org/

Thanks. I'm supposed to call my ISP and tell them I'm over 18 to access the site, apparently. Can't even begin to fathom why.

Prolog is a really good query language, in terms of expressiveness.

However, once you start bolting on side effects, like I/O and asserting/retracting facts in the middle of queries, all declarativeness flies out the window and you're left trying to explicitly juggle the implicit branching and control of the program counter to try to keep the system sane, bringing in more and more of the complexities of imperative programming. Optimization also basically requires wresting control of the program counter back from the abstract system into explicit programmer control.

It certainly works as part of practical real-world systems, but I wouldn't necessarily want to write a large, behavior-heavy application purely in Prolog. For large in-RAM data, but light on behavior stuff, fine. There are certain data analyses that are much faster to develop in Prolog than imperatively, and going multi-language can save a lot of programmer time compared to writing everything in $TRADITIONAL_LANGUAGE purely out of inertia.

>> Optimization also basically requires wresting control of the program counter back from the abstract system into explicit programmer control.

I'm unsure what this alludes to. Are you talking about minding tail-call optimisation? That's not a feature of Prolog only.

>> going multi-language can save a lot of programmer time compared to writing everything in $TRADITIONAL_LANGUAGE purely out of inertia

A better idea is to chuck out most of your stack and write most of your application in Prolog itself- no xml, no having to worry about Object-Relational Impedance Mismatch, and so on. You don't even need regular expressions, because writing a compiler in prolog with Definite Clause Grammars is a piece of cake (something that can't be said for any other language under the sun).
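To give a taste: a self-contained DCG that parses and evaluates sums of integers (a sketch, not production code):

```prolog
% Grammar rules (-->) are translated into ordinary predicates
% threading the input list; { } brackets plain Prolog goals.
expr(X)         --> term(T), expr_rest(T, X).
expr_rest(A, X) --> [0'+], term(T), { A1 is A + T }, expr_rest(A1, X).
expr_rest(X, X) --> [].
term(N)         --> digit(D), digits(D, N).
digits(A, N)    --> digit(D), { A1 is A * 10 + D }, digits(A1, N).
digits(N, N)    --> [].
digit(D)        --> [C], { C >= 0'0, C =< 0'9, D is C - 0'0 }.
% ?- phrase(expr(X), `10+2+3`).  % X = 15
```

Seven rules, and you get a tokeniser, parser and evaluator in one; that's the "piece of cake" part.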

For instance, see "Can I replace a LAMP stack with Prolog?", here:


>> I'm unsure what this alludes to. Are you talking about minding tail-call optimisation? That's not a feature of Prolog only.

Many of the optimizations I've seen are ensuring that no branches are explored that the programmer knows are irrelevant once certain unifications have succeeded, and to control the ordering of branch exploration to try to find faster answers first. Indexing is also manual, and while I haven't looked in a few years, I haven't seen any JIT-style heuristic feedback that lets the system optimize itself for the runtime conditions of Prolog programs.

In my definitions, which some certainly argue against, declarative means you let the CPU do "whatever it takes" in the background, while imperative has you directly managing where the program counter goes, in what order, including explicit thread management (which Prologs that I've seen make you do as well).

>> Many of the optimizations I've seen are ensuring that no branches are explored that the programmer knows are irrelevant (...)

OK, you're talking about the cut here. I've made some more comments on it above, but basically I don't really see what the problem is with it. It strikes me as more of a theoretical point to have to worry about.

Prolog takes a practical approach to both logic programming and the whole declarativeness thing. That is to say, if you can't have efficiency, you sacrifice some purity so that you can have a language that lets you do work. The cut, along with other program flow control predicates ((->)/2, (;)/2 etc) and predicates with side-effects (write/1, get0/1 etc) are normally discussed under "extra-logical facilities" or some such section of textbooks.

We all understand first-order logic inference is intractable, but we want to run it as a programming language so concessions must be made to material reality. Same goes for every other language- computing efficiently is one half of programming computers (untangling the mess you make is the other half).

This is a very old line of criticism and I don't really see the point of it anymore- not after having learned how to use the cut without hurting myself. So maybe it's just a problem in principle, because in practice it's not such a big deal.

>> Indexing is also manual, and while I haven't looked in a few years, I haven't seen any JIT-style heuristic feedback that lets the system optimize itself for the runtime conditions of Prolog programs.

Swi-Prolog has JIT clause indexing:


The first argument of compound terms is indexed, as long as it's atomic- so you do need to understand indexing to make best use of it. I recall a recent case where I made a mess and then untangled it once I realised how I was using indexing wrong.
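The classic illustration of first-argument indexing, for anyone following along (predicate names are mine):

```prolog
% [] and [_|_] have different principal functors, so the engine
% jumps straight to the matching clause and leaves no choice point.
len([], 0).
len([_|T], N) :- len(T, M), N is M + 1.

% Swap the arguments and the discriminating term is now in the
% second position: both clauses get tried, creating spurious
% choice points on every call.
len2(0, []).
len2(N, [_|T]) :- len2(M, T), N is M + 1.
```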

So, you need to know some programming, right? I don't see that as a bad thing.

>> explicit thread management (which Prologs that I've seen make you do as well).

For multithreaded applications, that's true. I don't know how much of a problem that is in practice because I've never really had to use the facilities, but at least you've got immutability to protect you from the usual pain.

>> So, you need to know some programming, right? I don't see that as a bad thing.

These concessions are a bad thing if the goal is to work in a new model of programming distanced from the complexities and bugs of imperative programming. These concessions bring back the same class of bugs and problems, exemplified by "oh, you just have to learn how not to have your program self-destruct by using this low-level operation incorrectly" in a declarative language.

I'd have no problem with replacing green cuts and implicit ordering with separate, non-invasive hint declarations, instead of exposing low-level backtracking-munging operations which can be used in odd and destructive ways (and we in fact did so in our internal Prolog-inspired languages). The presence of these operations and constraints also means the compiler often can't reorder clauses, even when it could (statically or dynamically) determine that one ordering is better than another.

Taking a mostly declarative language and throwing in cut, inline io/assertions/retractions when backtracking can occur, and demanding the execution follow the lexical order of all clauses is basically like throwing raw C pointers into Java: It seems to go against the language's design intent. It's a hack to get things going back when computers were too small & slow to have smarter compilers, and these hacks bring with them otherwise completely unnecessary problems. I don't think they should be defended, but viewed as warts that hopefully the language can design past in the future.

I've definitely gotten the sense that Prolog would make a great library or extension to another, more traditionally-paradigmed language. In the same way that LINQ has received a lot of praise as an extension to C#.

In a way, that's what https://en.wikipedia.org/wiki/MiniKanren is -- a backtracking logic mini-language that you embed in another language.

That looks like the Haskell tutorials and trainings I've seen.

At the end of it: 1) you didn't learn to read or write text on the console; 2) you are unable to write any useful program, not even a hello world.

This kind of non-practical training material is a recurring issue with functional languages.

That drives me up the wall too at times.

It's a recurring theme when teaching languages to do it through something the author (maybe implicitly) considers a good fit, something that emphasises the beauty of that particular language.

Might be good while you are learning your first language, maybe ?

This explains the I/O- and side-effect-heavy imperative tutorials, the structure-rich OO tutorials, the 'algorithmic' FP ones, etc. Turned upside down, this quite clearly tells you which areas a language is strong in, and which are complicated or awkward.

Myself, I prefer to start with the ugly parts, they contain much more information about the tradeoffs and focus of any particular language.

Well, it never claims to teach you all of the language; it tries to show off some neat parts. Articles like that often don't go into the details of I/O. Especially articles about languages with REPLs, which have much less need for that since they allow you to interact with your program more directly. (But it's true that many other Prolog resources also ignore I/O.)

I'll add my default recommendation that I provide in every intro to Prolog thread: The best book to learn is "Prolog Programming for Artificial Intelligence". The intro part is great and the AI parts are also excellent. I think "Art of Prolog" and "Craft of Prolog" should be next in that order.

I fully support the choice of SWI-Prolog by the author of the post. SWI also has good interfaces to other programming languages; I've only used the Java bridge, but I think Prolog mostly shines if you mix it with another language. I think a good next step would be researching definite clause grammars and constraint programming, as those are (imo) the best use cases for Prolog. If you're mostly interested in constraint solvers I recommend ECLiPSe (not the IDE, very unfortunate naming): http://www.eclipseclp.org/

I think Prolog is a great language to train your brain, if that makes any sense. It's quite nice to write a Sudoku solver and the like in it even though you can get better performance in other languages. Whenever I write Prolog it takes a bit for my brain to readjust, which I feel is a pretty good sign as it means you're working in a different paradigm.

For those interested in AI I would add "AI algorithms, Data structres, and idioms in Prolog, Lisp and Java":


In fact I'd recommend that to anyone, regardless of interest in AI- it's a great programming textbook all around and accessible to anyone who knows at least one of the three languages (coughjavacough).

A little known fact is that Microsoft actually used an embedded Prolog interpreter for network configuration in Windows NT:


And its first version was developed on a Macintosh, too! Thanks for the link. Been some good history lessons on Microsoft recently that keep tripping me out.

Personally, Prolog was the biggest "woah" moment for me in learning programming languages.

If anybody wants a challenging and enlightening programming exercise, I recommend writing your own prolog[esque] interpreter.
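The classic place to start is the three-clause "Prolog in Prolog" meta-interpreter, before attempting a from-scratch interpreter in another language:

```prolog
% The vanilla meta-interpreter: clause/2 reflects on the
% program's own rules, and solve/1 re-implements resolution.
solve(true) :- !.
solve((A, B)) :- !, solve(A), solve(B).
solve(Goal) :- clause(Goal, Body), solve(Body).
```

Three clauses, and you have a hook for tracing, proof trees, or alternative search strategies.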

Surprisingly, it still has not been used much in games (well, actually not that surprising given how time-consuming and risky it is). I made a stab at it [1] but it ended up taking more time than I had budgeted for. One game I suspect may be using it is 1935 [2], although they are vague on the details.

[1] https://www.youtube.com/watch?v=2EHKDP2_ky0 [2] https://mollyrocket.com/news_0023.html

I learned Prolog in some university courses, but I'm gutted that I haven't had a single chance to use it in my professional life. I've given some thought to what kinds of projects it would be suitable for.

The other day I was playing Sid Meier's Civilization and realized that the rule engine for games like that would be a very good fit for a Prolog-like logic programming language (using an embedded interpreter). Prolog would allow adding new rules without touching the old ones; e.g., adding a new rule for, say, a defensive bonus would be just a single new line in the code base.
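The "one new rule is one new line" idea can be sketched even in a procedural host language, by treating rules as independent, self-contained entries rather than branches in existing code. All the rule names and numbers below are invented for illustration:

```python
# Each rule is a standalone clause: a (predicate-name, function) pair.
# Adding a new defensive bonus means appending one entry; no existing
# rule is touched. Data and bonuses here are hypothetical.

rules = [
    ("defense_bonus", lambda unit, terrain: 0.5 if terrain == "hills" else 0.0),
    ("defense_bonus", lambda unit, terrain: 0.25 if unit == "spearman" else 0.0),
]

def total(name, *args):
    """Sum the contributions of every rule registered under `name`."""
    return sum(fn(*args) for rule_name, fn in rules if rule_name == name)

print(total("defense_bonus", "spearman", "hills"))  # 0.75
```

In Prolog the same effect falls out naturally, since each clause of a predicate is already an independent fact or rule.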

Prolog is very useful for decision and optimization problems. There are a ton of those available in the world. For example, a lot of scheduling problems are relatively tractable with logic programming and a total bear in imperative/OO languages.

On the other hand, I have first-hand experience of two companies who used C to solve these problems, due to performance and maintenance of the logic. One in classroom scheduling, the other in hospital staff scheduling. Both of these are among the examples most cited as useful. I still have The Art of Prolog on the shelf next to me. Fun but useless.

Unification + backtracking is really, really, really really cute. Makes you think very abstractly, so much that it made me feel that haskell was verbose (not trying to be hyperbolic here).
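To make the "cute" part concrete: unification is just structural matching that binds variables on both sides. Here is a minimal, hedged sketch in Python (not any particular library), where variables are capitalized strings and compound terms are tuples:

```python
# Minimal first-order unification, the operation at the heart of Prolog.
# Variables: strings starting with an uppercase letter.
# Compound terms: tuples like ("parent", "X", "tom").

def walk(term, subst):
    """Follow variable bindings until a non-variable or unbound variable."""
    while isinstance(term, str) and term[:1].isupper() and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst):
    """Return an extended substitution if a and b unify, else None."""
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if isinstance(a, str) and a[:1].isupper():
        return {**subst, a: b}
    if isinstance(b, str) and b[:1].isupper():
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

# parent(X, tom) unifies with parent(bob, Y), binding X=bob and Y=tom.
print(unify(("parent", "X", "tom"), ("parent", "bob", "Y"), {}))
```

Backtracking is then "just" trying each clause, undoing the bindings on failure; Prolog gives you both for free on every call.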

A nice book I've read recently is Ivan Bratko's https://ailab.si/ivan/novice.php

Try to grab a 2nd-hand copy, or a library with it on the shelves (probably gathering dust). The basics are covered, then graph search, state space, tricks like difference lists (a pretty unforeseen use of Prolog semantics tbh), a bit of NLP.

Very* cool

Neat tools. Have you seen any attempts to model HOL in FOL for verification in FOL? There's a FOL down to assembly verified prover and stack. It's ACL2-like. I also found a paper doing HOL-to-FOL conversion with that tool modeled and proven in HOL. Doing HOL in FOL or HOL-to-FOL converter in FOL seems like a short-cut to verified, proof stack.

If the shortcut is doable at all. The HOL-to-FOL gave me some excitement about this. It just wasn't done entirely in FOL itself.

Haven't seen it, but I'll check it out. Thanks for the pointer.

Forgot to add the link so you can see what such a process might look like. Here's slides and a paper on one that merged HOL and ACL2. But it verified that in HOL4, not ACL2.



I understood none of that FOLdeROL

It was for an audience that would. The rest would be unlikely to be helpful. Filtering trick. ;)

Anyway, here's the work the result will build on whose description is useful to wider audience:



Hey, pretty neat--thanks

Ha nice, I was looking for recent advances in this branch; most things went the CPL way.

Check out Mercury programming language if you haven't. It's like a better, faster Prolog. Been around quite a while.


I think I read that Prince XML (a high end PDF creation tool) is written in Mercury, and that Mercury is somewhat fast (anecdotal).

Ah the Ivan Bratko book was the textbook at Uni... gosh 13 years ago. I loved doing Prolog and once the penny drops it makes you feel very clever thinking about things differently.

I'm not certain it's useful in the real world, but I did write a solitaire solver in it that could work with any number of decks and consequent stacks.

I find it extremely refreshing to wander around a new paradigm. After some webdev and even FP fatigue, logic variables really opened some perspectives on things. Actually, it felt like an extension of FP; imperative are memory -> memory, FP would be thought as one path of expression -> env -> expression through an evaluation graph walk; prolog is breadth search of the whole graph. It blends programs with space in an interesting way, it did alter my mind a fair amount.

I am always puzzled why Curry which is Haskell + Prolog in one unified packet has never taken off. Combining the advantages of functional and logic programming paradigms seems to have so many interesting applications.

Admittedly, Curry is an academic language, not 100% production ready.


Mercury is another programming language that combines Prolog plus functional programming that probably doesn't get as much attention as it deserves https://mercurylang.org/

With a build system like that, it gets exactly as much attention as it deserves... D:

Do a ./configure --help, and try to figure out what the best "grade"(s) to build is/are.

I've been meaning to try Mercury for years, but after this, there's no chance I'm going to try writing anything in it. What other crufty unusable insanity have they built in there?

It is a bit obscure. I let it build the ones it wants to build by default. This gives me the flexibility of deciding later when building Mercury programs what I want as a backend.

This blog post describes some of them: https://adventuresinmercury.blogspot.co.nz/2011/11/making-gr...

Even more interesting is that Prof. Dr. Hanus, one of the two original designers of the language, teaches both Haskell and Prolog in his lecture, but not Curry.

(And Dr. Huch, another person from Hanus’ team, who held that lecture this year, also didn’t mention Curry in it.)

He did reply to a mail of mine a couple of months ago though. Curry is still in development.

It’s just weird – I’m taking that course right now myself, exam on the 14th – that it was never mentioned.

I really don't know. Can you ask him? This was my mail and his reply:

> I am curious, is there a potential date for release of version 1.0.0 of the Curry programming language in the near future, i.e. to be considered production-ready and feature-complete?

Although we are working for years on Curry, I am careful to release a 1.x version. In the last years, the design stabilized a lot, in particular with the recent switch to 0.9. However, since Curry intends to be an extension of Haskell, one big thing is missing: type classes. If type classes are added, I think it could be called 1.0. However, for this purpose, we need serious implementations of type classes. We made one approach some time ago but it needs a lot of effort to get it ready with all the tools that exist so far. Now, we are working on another approach to add type classes that might be finished in the next year. Since the addition of type classes breaks (the type signatures of) existing code, it is not a simple extension.

Sorry that I can't be more specific. At least, we are working on it, but our capacity is limited :-(

Best regards,


I feel like people are somewhat aware of Prolog, but its cousin Datalog, a DB query language, is much less widely known. I don't have too much experience with it, but it seems like one of those technologies of the past that's superior to current alternatives but for some reason didn't take off.

The ideas of datalog are making a slow comeback. For example, datalog is a query language for Datomic DB.

Datalog is also quite actively used in ontology based data access where Datalog rules are used as mappings between databases and ontologies (see e.g. [1]). Variants and extensions of Datalog is also research actively in the field of answer set programming[2], with many usable implementations.

[1] - http://ontop.inf.unibz.it/ [2] - https://en.m.wikipedia.org/wiki/Answer_set_programming

Does your nick have anything to do with Burzum?

I'm aware of Datomic but I feel like it will be a while before it becomes "the standard".

We use Datalog every day in a React-Native application, via the datascript db (https://github.com/tonsky/datascript). Has been a very positive experience so far.

Thinking in "facts" / binary relations is itself a refreshing approach.

The Datomic database is a great datalog implementation

Is Prolog good for assignment problems (assigning guests to rooms or people to beds) when there are soft constraints?

I asked twice on StackOverflow [0][1] and had the question declined each time.

[0]: http://stackoverflow.com/questions/36148764/weak-rules-in-pr... [1]: http://stackoverflow.com/questions/37610341/assigning-people...

Prolog normally finds a solution, and if you decline it, a new solution, and so on until it can't find any more solutions. If you pay careful attention to your program, you can make it so that it finds ones that satisfy soft constraints first, and then solutions that don't satisfy them.

This is what the replies in [0] suggested. Prolog isn't exactly intended for preferring some solutions over others, but it has predictable solution generation that can be used to prescribe priority.

That's a lot of mental work, but the code is succinct. If you want to get by without flow diagrams, your best bet is to generate and optimize a cost function, and that can be done in any language.
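The "generate candidates, then optimize a cost function" route suggested above can be sketched in plain Python. The guests, rooms, and preferences here are invented for illustration; a real scheduler would prune rather than enumerate everything:

```python
from itertools import permutations

# Hypothetical data: one guest per room.
guests = ["alice", "bob", "carol"]
rooms = ["r1", "r2", "r3"]

def cost(assignment):
    """Count soft-constraint violations; lower is better."""
    c = 0
    if assignment["alice"] == "r1":              # alice would rather avoid noisy r1
        c += 1
    if {assignment["bob"], assignment["carol"]} != {"r1", "r2"}:
        c += 1                                   # bob and carol want adjacent rooms
    return c

# Enumerate all hard-feasible assignments, then take the one with the
# fewest soft-constraint violations.
candidates = [dict(zip(guests, p)) for p in permutations(rooms)]
best = min(candidates, key=cost)
print(best, cost(best))
```

The Prolog version of the same trick orders clauses so that preferred assignments are generated first, which is the "predictable solution generation" mentioned above.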

I was in touch with some researchers who had implemented a shift-scheduling system in Prolog; I even got a sample of the code. But in the end we had a small roster and managed to agree on a good rotating schedule, so I never got around to really testing it...

But prior to that my plan was to try and write such a system in Prolog (or some kind of embedded logic, like miniKanren).

Note that while Prolog can find all solutions to a combinatorial problem, it can't (obviously) protect you from the complexity of searching all solutions...

I'm always happy to see Prolog get some love. Learning it in a university course really expanded my mind, similar to when I first tried FP with ML in another course.

I went to a talk about microKanren, another logic programming language, which is embeddable in host languages. I think it shows a lot of promise for implementing decision engines. The biggest downside is that it can be tough to reason about the runtime characteristics of nontrivial programs, right about at the point where the paradigm starts to be really useful for modeling complex problems. It takes some experience to know how to write it efficiently.

Prolog looks really useful for solving real world problems. Why doesn't it get used more?

And can you interface to it using Python or some more mainstream languages?

> And can you interface to it using Python or some more mainstream languages?

You can and should use logic programming that way. There are libraries for doing e.g. networking and user interfaces with Prolog, but it's not a particularly good fit for that kind of programming.

Many Prolog implementations are quite easy to embed to other languages and there are logic programming embedded languages for most popular programming languages. miniKanren [0] is a popular embeddable logic programming language. There's a chapter about implementing a logic programming environment in Structure and Interpretation of Computer Programs (SICP), which is pretty straightforward.

For my AI courses at the uni years ago, I implemented a simple logic programming environment in Haskell [1]. It was a very fun project and it took me just a few days of work.

[0] http://minikanren.org/ [1] https://github.com/rikusalminen/slolog

It's popular to embed a prolog in Lisp languages. For example Picolisp and Shen. I wrote a post comparing Shen's Prolog support to SWI Prolog recently: https://bluishcoder.co.nz/2016/08/30/kicking-the-tires-of-sh...

Is the license for Shen still that you own a copy of Tarver's book?

There's a BSD licensed open source version, all linked from http://shenlanguage.org/download_form.html

That's what I used for my post. There's "Shen Professional" which is a closed source subscription based product getting rapid development http://shenlanguage.org/professional.html

I have not used "Shen Professional" though.

That's good news... curious if he has any customers for the professional version. I'm sure it's a great product, I'm just not aware of too many people who would depend on something that niche for production work. Maybe niche production work :)


The logic programming language that is used most for real world problems is SQL (the database query language).

Now, SQL may be terrible as a logic programming language (in terms of purity, generality, etc.) but it's one of those things where, because it's there and its capabilities are good enough for a lot of purposes, it gets used more than tools that one has to go out of one's way to find/use.

Further, note that SQL is widely reviled, partly for its irregularity but also for the complexity that is inherent to large SQL expressions.

It seems likely to me that if Prolog ever gained the popularity of SQL, there would be umpteen OO packages out there promising to "tame the complexity of Prolog" with one or another OO wrapper, an approach which I assume would leave pure logic programmers aghast.

I do not think you are correct in saying that SQL is a logic programming language. A logic programming language is defined in terms of rules/implications[1], which SQL is not. However, it is a declarative language, which Prolog also is. In a declarative language you state what a solution is[2] (which the program then searches for), in contrast to the more traditional type of programming (imperative) that states how to obtain a solution.

[1] - https://en.m.wikipedia.org/wiki/Logic_programming [2] - https://en.m.wikipedia.org/wiki/Declarative_programming


If you look at my whole post, I hope it's clear I'm not trying to say SQL is formally anything, just that it's effectively a logic programming language.

I.e., in an ad-hoc sort of way, SQL can accomplish what a more formal logic programming language accomplishes, more or less. You specify a world as a set of tuples and determine whether one set of relations determines another set of relations. That lets you do essentially what Prolog can do: for example, you can map all the examples in the parent article to SQL pretty easily.
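For instance, the family-tree facts from the article map onto a table, and a rule like `grandparent(X, Z) :- parent(X, Y), parent(Y, Z)` becomes a self-join. A sketch using Python's built-in sqlite3, with invented data:

```python
import sqlite3

# Facts as rows: parent(tom, bob), parent(bob, ann), parent(bob, pat).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE parent (who TEXT, child TEXT)")
db.executemany("INSERT INTO parent VALUES (?, ?)",
               [("tom", "bob"), ("bob", "ann"), ("bob", "pat")])

# The grandparent rule as a self-join on the parent relation.
rows = db.execute("""
    SELECT p1.who, p2.child
    FROM parent p1 JOIN parent p2 ON p1.child = p2.who
""").fetchall()
print(rows)  # tom is a grandparent of ann and pat
```

What this doesn't give you is recursion over the relation (ancestor-of) without recursive CTEs, or backtracking search, which is where the analogy starts to strain.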

> for example you can map all the example in the parent article to SQL pretty easily

How would you map the map coloring example? You can model the color table and the neighbor table in SQL, but that's just the data model. What Prolog also gives you is backtracking search. Is there a good way to do that in SQL? I guess it would work by computing the Cartesian product of states and colors and trying a select on that...
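For reference, the backtracking search Prolog supplies for free looks roughly like this when written out by hand; the states and adjacency data are a small invented fragment, not the article's actual map:

```python
# Hand-rolled backtracking map coloring: try a color, recurse, and undo
# the choice on failure -- which is what Prolog's search does implicitly.

colors = ["red", "green", "blue"]
neighbors = {("wa", "or"), ("wa", "id"), ("or", "id"), ("or", "nv"), ("id", "nv")}

def ok(state, color, assignment):
    """A color is allowed if no already-colored neighbor has it."""
    return all(assignment.get(other) != color
               for a, b in neighbors
               for other in ([b] if a == state else [a] if b == state else []))

def solve(states, assignment=None):
    assignment = assignment or {}
    if not states:
        return assignment
    state, rest = states[0], states[1:]
    for color in colors:                       # choice point
        if ok(state, color, assignment):
            result = solve(rest, {**assignment, state: color})
            if result is not None:
                return result                  # commit on success
    return None                                # backtrack

solution = solve(["wa", "or", "id", "nv"])
print(solution)
```

Expressing this choice-point-and-retry control flow in a single declarative SQL query is the hard part; the Cartesian-product approach effectively materializes every candidate coloring up front instead of searching.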

> You specify a world as a set of tuples and determine whether one set of relations determines another set of relations. That lets you do essentially what prolog can do

Datalog, maybe, if you have the search mentioned above. Not Prolog, though: Prolog has terms with free variables and unification. Those have no simple "database" interpretation.

The featured article only shows off a small (non-recursive, negation-free) fragment of a Datalog-like language, which itself is only a small (function-free) fragment of Prolog. You can't make broad claims about Prolog from this small set of examples.

It makes some trivial things really hard. Fun fact though, Joe Armstrong et al wrote the original version of Erlang in prolog. There are some bindings, ymmv.

Erlog is an actively maintained option for using prolog from erlang. It's maintained by Robert Virding, who also maintains Lisp Flavored Erlang (lfe).

Oh really? Awesome!

> Why doesn't it get used more?

Few people know it, even fewer people like it. It tends to be taught really badly. There are not many recent books, and the old ones teach an outdated pre-ISO-standard language and tend to overemphasize use of the cut (!) operator. Practical aspects needed to build "real-world" systems are almost never discussed. (Interfacing with the operating system in various ways, exceptions and other ways of robust error handling, judicious use of mutable data structures when needed, debugging, profiling, optimization, ...)

But yes, if you get over all these hurdles, it's useful for many things.

> And can you interface to it using Python or some more mainstream languages?

Yes, but you might have to go through foreign function interfaces that are painful from both ends. Or, depending on circumstances, you can just use text files (or pipes) to communicate with an external program written in the other language.

In mainstream languages you can embed a rules engine [1]. But there are the usual issues with embedding another language - you don't do it for something trivial, just when you really need it. Also, if you're going to use a rules engine, something less powerful than Prolog might be a better fit.

[1] http://martinfowler.com/bliki/RulesEngine.html

For Python, check out Pyke or pyDatalog.

Because like FP it requires a different way of thinking.

It also didn't help that Japan's Fifth Generation Computer project, which aimed to use Prolog as a systems language, didn't work out.


Or the whole "AI is about to be everywhere because it's working well on a few problems" thing. That led to the AI winter, with Lisp and Prolog taking the brunt of it at the language level. At least we haven't seen articles like that recently, right? ;)

How many prolog programmers does it take to change a lightbulb?


How many Lisp programmers does it take to change a lightbulb?

Ans. 1. (1)

Ans 2. I'll tell you after I've debugged and can run a macro I'm writing for this.

Ans 3. nil

Prolog is only appropriate for certain kinds of programs. A long time ago I programmed in Prolog, and it was a very difficult task when the problem was not about matching and unification.

For me, Lisp was much more appropriate and easier to program in. Forward chaining and unification are not enough for certain classes of algorithms; sometimes you need backward chaining, probabilistic inference, and many other ideas that are not easily translated into Prolog. I remember some discussion about the semantics of clauses. For example, missile(ready): is this saying the missile is already ready, or that you are going to prepare the missile, etc.? I find that concurrency and parallelism in Prolog are not easy. In the book "On Lisp" by P. Graham there is a Prolog interpreter in Lisp, which can be interesting for somebody wishing to learn Prolog.

Edit, added: Erlang was inspired by Prolog, so learning Prolog will help you learn Erlang; but if you are a Ruby type, Elixir is your best option.

A system written using SWI Prolog is apparently used in a large number of NZ stock exchange trades https://dtai.cs.kuleuven.be/CHR/files/Elston_SecuritEase.pdf

If you're interested in logic programming, Mozart/Oz offers a better deal than raw Prolog for practical problems. The reason is that you can express various search strategies using the concept of "computation spaces" in Mozart.

Instead of learning one language every year, learn new programming paradigms. The logic programming paradigm is not implemented by many languages, so, might be interesting to learn.

This helped explain a lot about how and why Prolog works. The coolest thing? That's only a tiny percent of the language, but it's already enough to start solving problems.

For those of you looking for prolog out in the real world, a surprising number of languages contain embedded interpreters for it. Mostly Lisps.

Perhaps most famously, PicoLisp uses a tiny, Lisp-syntax Prolog called Pilog as a query language for its built-in database. Yes, it has a built-in database. PicoLisp is weird.

Picolisp is pretty awesomely weird. It has a lot of doc, but I don't find it the easiest to grok. How does the database work? I understand how you can query it with a prolog-esque syntax, but where is all the data stored? Is it just in memory, so this database lives as long as the application is running?

You can create symbols that persist to an external database file when you call 'commit' or rollback changes with 'rollback'.

Kind of like SQLite?

a surprising number of languages contain embedded interpreters for it. Mostly Lisps.

Also, Smalltalks (seriously).

Yes, my first exposure to Prolog was the toy version included with Smalltalk/V in the late 1980s.

Really? I can't say I'm entirely shocked, but still...

I've been using Prolog since 2008-ish, on and off, with more time spent using it in the last couple of years. I've written my degree and Masters dissertations in Prolog, using different Prologs for each. I code all my personal stuff in Prolog.

If I could condense the experience of those last few years with Prolog in one sentence it would be this one: "Prolog is not your friend".

It's not an "easy" language to learn. It's not javascript but with first-order abstractions; it's not Java with a different VM. You need to grok a fair few things about both the theory behind it (in other words, first-order logic) and the implementation decisions taken to make it efficient. For instance: clause selection order or, yep, the cut. You need to understand how this works and, ahem, cut yourself too many times until you get the message. Well, I had to, anyway.

There's a bunch of stuff you (or at least, I) never have to think about in day-to-day programming work, such as depth-first search and how it can go infinite. Then you need to get some experience with your chosen Prolog interpreter and learn its idiosyncrasies [1].

There are rewards. [plug] My Masters dissertation [2] is a novel algorithm that learns a grammar from unannotated text. Because it produces a grammar in Prolog's Definite Clause Grammars notation, and because Prolog can interpret DCGs as first-order logic theories, these grammars are also logic programs that model the algorithm's input. I process some text, spit out a grammar and then without needing a separate compiler I run this grammar backwards to generate new strings, or the normal way to parse them. [/plug]

There's a point in the Prolog learning curve where you risk baldness from pulling out your own hair. Once past that point... it keeps hurting you, but the things you can do with it, it's a bit like magic. Or maybe it's Stockholm syndrome :)

I've seen many criticisms of Prolog (some that are repeated in this thread). Eight years in I haven't seen one single line of criticism that still makes sense after you've used the language for a few years.

So to me the real problem with Prolog is that it hurts you so much that most people give up before they really figure it out.

Oh and- should you pick Prolog up never forget this: The dynamic database is evil.

[1] http://www.swi-prolog.org/pldoc/man?section=jitindex

Just a few days ago I was working on something I thought was best tackled using dynamic predicates (written to the database) rather than an in-memory list. My program took hours to process some data, until I noticed I was querying a compound term for its second argument. Just switching arguments brought the processing time down to five minutes on the same data. Did I mention the dynamic db is evil?

[2] https://github.com/stassa/THELEMA

Prolog was part of a course I took at RIT back in the late 90s.

I ended up using it a few times since then in a professional setting to handle filling out extremely complex forms.

I even built a module on CPAN to handle creating Prolog facts from Perl data structures.

I remember Borland Turbo Prolog. But it was so long ago that I lost the disk it was on.

Its descendant, "Visual Prolog"[0], appears to have a free edition for personal use[1].

[0] https://en.wikipedia.org/wiki/Visual_Prolog "Visual Prolog, also formerly known as PDC Prolog and Turbo Prolog, is a strongly typed object-oriented extension of Prolog. As Turbo Prolog it was marketed by Borland, but it is now developed and marketed by the Danish firm Prolog Development Center (PDC) that originally developed it. "

[1] http://www.visual-prolog.com/vip/download/default.htm

A good post but (2013)
