Hacker News
Prolog for Beginners (coursera.org)
99 points by YeGoblynQueenne 53 days ago | 51 comments

A question for Prolog practitioners: did it 'click' at some point, after which you suddenly approached problems differently, or was it more of a slow change where you incorporated more and more ideas from Prolog into your daily work until you realized you had got it?

For me it was a slow process. It takes a while to grasp the paradigm shift and unwind some mental schemas if you are used to imperative languages and want to start thinking in declarative terms in a way that makes you productive as a Prolog programmer.

That being said, learning about the inner workings of Prolog and how the WAM (Warren Abstract Machine) is composed helped me a lot to adjust my programming skills and align them with how things are best coded in the language.

Something that is still a difficult hurdle for me to overcome is how Prolog starts out feeling declarative, but then very quickly you need to be aware of all the backtracking rules to correctly guide Prolog to the solution you want. I have tried to get through Clocksin and Mellish's Programming in Prolog a couple of times now, and it still strikes me how quickly you need to get into the invisible details nobody blogs about in order to be even base-level productive in it.

You do need to keep both the procedural and declarative reading of Prolog programs in your head at the same time, yes. But there is such a thing as a "pure subset" of Prolog that doesn't use a) cuts, b) the dynamic database or c) destructive assignment (with arg/3 etc) and if you stay in that pure subset you can go a long way before you need to think procedurally.
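To make the "pure subset" concrete, here is a membership predicate written with no cuts, no dynamic database and no destructive assignment (a sketch; real Prologs ship this as the built-in member/2):

    mem(X, [X|_]).
    mem(X, [_|T]) :- mem(X, T).

Because it is pure, the one definition answers several different questions: mem(b, [a,b,c]) tests membership, mem(X, [a,b,c]) enumerates elements on backtracking, and mem(a, L) enumerates (infinitely many) lists containing a.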

But to be honest, I myself prefer to switch between the two paradigms at will when thinking about my programs. The trick is to do it in a way that leaves you with the best of both worlds, rather than the worst.

In terms of sources that go into more advanced material and avoid excessive procedural thinking, one is Markus Triska's website recommended in the sibling comment, and I would also recommend Richard O'Keefe's "The Craft of Prolog" and Sterling and Shapiro's "The Art of Prolog".

To be fair, you won't find a lot of Prolog material in blogs so you can expect a lot to be left unsaid in the few that exist.

Have a look at the "power of prolog" YouTube channel (and website) by Markus Triska.

It took a few months, working in a massive, multi-million-line "rules engine". The first bit was really understanding how recursion works and how to trace the code.

After that came a close understanding of logical operations like cut and negation; the most important part was how certain operations could really impact performance, to the point that they were mostly "outlawed".

It's been years since I've used prolog, but I could probably pick it back up in a few weeks.

When I learned Prolog I already had a semester of logic math so it felt natural.

It is already a bit old, but you might have a look at it.


I'm really fascinated by the resurgence of some of these old languages. I vaguely understand that the paradigm behind Prolog is quite different compared to OOP or functional languages. Can anyone please help me understand where this is really useful? As everything is data oriented these days, is Prolog a good language for doing statistics or ML or scientific computing?

It's really quite different, and I don't think I can fully do it justice in a comment. I will give my favourite example though: it certainly helped me get the "aha" moment.

Let's say you have a collection of cities, and a collection of roads between them. And you want to answer the question: "is there a route from City A to City E?"

A solution in prolog would look something like this:

    /* First, the roads */

    road("A", "B").
    road("B", "C").
    road("C", "D").
    road("D", "E").

    /* Now the algorithm */

    route(From, To) :-
        road(From, To).

    route(From, To) :-
        road(From, Intermediate),
        route(Intermediate, To).

That's it. There's no need to loop/iterate/check conditions. Simply state the facts, ask the question, and the prolog interpreter will search for an answer:

    ?- route("A", "E").
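
And because route/2 is a relation rather than a function, the same program answers a more open question for free: leave the second argument as a variable and Prolog enumerates every city reachable from "A". On a standard depth-first system such as SWI-Prolog, that looks something like:

    ?- route("A", X).
    X = "B" ;
    X = "C" ;
    X = "D" ;
    X = "E" ;
    false.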


Why do you need route?

Can't you just use the roads you've got as the base case and then say

    road(X, Z) :- road(X, Y), road(Y, Z).

> Why do you need route?

Because it creates a left-recursive call that will eventually cause a stack overflow. If you want to try it out, there's Swish[0] - an online prolog editor and interpreter.

[0]: https://swish.swi-prolog.org/
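A compressed sketch of what goes wrong with the single-predicate version (comments only, since the query does not terminate):

    /* road(X, Z) :- road(X, Y), road(Y, Z). */

    /* Once the facts are used up, the first subgoal road(X, Y)
       can only be proved by this same clause, so the depth-first
       search keeps expanding it without ever reaching a fact:

         road(X, Z) -> road(X, Y) -> road(X, Y1) -> ...

       Keeping the facts in road/2 and the recursion in route/2
       means every recursive step first consumes a road/2 fact,
       so the search always makes progress. */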

Prolog is well suited to transformations, representing and processing rules, and searching. It's a good fit for any program that can be expressed as a transformations or implications. Compilers and other programs that transform data are a good fit, as are rules engines, and pretty much anything that behaves like them.

It turns out that transformations and rules engines are pretty generally useful, so Prolog makes a decent general-purpose programming language. It does take some getting used to, because its mode of operation is different from imperative and functional languages. You don't say, "do this, do that," and you don't present a functional expression for reduction to normal form. Instead, you write a set of facts and rules, then supply some initial conditions and let Prolog compute all the results that your rules imply. It can be a little brain-bending at first because most people seem to have an easier time with "do this, do that" than "this implies that, that implies the other thing."

On the other hand, for problems that are naturally represented as transformations or rules engines, it can be sort of magical. One fun detail is that, since you can specify the logical relations between terms and not an order of operations, you can commonly write Prolog programs that can be run "forward" and "backward" -- that is, given a transformation between A and B, if you supply A it computes B, and if you supply B it computes A.
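The standard list-concatenation predicate is the textbook example of running one definition in both directions (sketched here as app/3 to avoid clashing with the built-in append/3):

    /* app(Xs, Ys, Zs): Zs is Xs followed by Ys. */
    app([], Ys, Ys).
    app([X|Xs], Ys, [X|Zs]) :- app(Xs, Ys, Zs).

Run "forward", app([1,2], [3,4], Zs) computes the concatenation; run "backward", app(Xs, Ys, [1,2,3]) enumerates every way of splitting [1,2,3] into a prefix and a suffix.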

Deep learning may fit in the Prolog paradigm in terms of finding programs to compute the output from the input. I am sure people are working on it; it is one aspect of differentiable programming. This is similar to functional approaches as well. The search here is for the function f that computes f(A) = B. Today we specify the state space as a CNN or transformer model. Prolog knows how to do deductions from logical statements, so you'd need something similar for mapping f, I guess.

>> Deep learning may fit in the prolog paradigm in terms of finding programs to compute the output from the input.

That's interesting, but my hunch is that you'd need two deep learning models to do what a single Prolog program could do: one model to map inputs to outputs and one to map outputs to inputs.

One way to describe deep neural nets is to say that they are function approximators. Now, functions have well-defined inputs and outputs, although I don't think that's actual mathematical terminology. Functions are basically mappings from the elements of one set _to_ the elements of another. But a function mapping is uni-directional. If you have a function ƒ: X → Y, it maps elements of X to Y, but it doesn't map elements of Y to X.

So for instance, if you have a function that maps the set of integers from 1 to 26 to the set of letters in the English alphabet, you don't simultaneously have a function that maps letters to integers: you need another function. This is true both in mathematical terms and in programming terms.

Suppose for instance that you have a pseudocode function with the signature:

  int_char(int: n) -> char: c

You can call int_char() as:

  int_char(2)

And get out "b", but you can't call it the other way:

  int_char(b)

And expect to get 2 in return.

In Prolog, on the other hand, there are no functions. Rather, every expression is a predicate, or in other words an n-ary relation. Now, the concept of a relation is a generalisation of the concept of a function, so in Prolog you could write int_char() above with the signature:

  int_char(N, C)

And call it with either or both arguments instantiated... or none:

  ?- int_char(2,C).
  C = b.

  ?- int_char(N,b).
  N = 2.

  ?- int_char(2,b).

  ?- int_char(N,C).
  N = 1, C = a ;
  N = 2, C = b ;
  N = 3, C = c ;
  % ... etc

Which you can't easily do with other languages. Well, you can do it in, say, C, if you write a Prolog interpreter in C :-)
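For completeness, int_char/2 itself could be nothing more than a table of facts (a sketch; the comment above never shows the definition):

    int_char(1, a).
    int_char(2, b).
    int_char(3, c).
    /* ... one fact per letter, down to int_char(26, z). */

Every query above then works, in every mode, from this one definition.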

Anyway, deep learning learns functions, not relations, so its models can't go back-and-forth between sets in mappings. In theory anyway; deep neural nets often don't really care about theory.

No, Prolog is weak for things involving concrete calculations. It is absolutely stellar for anything involving using or building a logical transformation framework, such as building provers, compilers, solvers and the like. Most people will tell you it's slow, but that's comparing it against very specialized solvers built in C++ and the like. The yield/effort ratio of Prolog programs for suitable problems is very high.

> I'm really fascinated by the resurgence of some of these old languages.

Not wanting to take away from your fascination ;) but even though Prolog is from 1972 (which may or may not be old, depending on your PoV), core Prolog syntax is pretty much a minimal language for representing terms and term unification. Doesn't get any more straightforward than that, whether modern or not.

Prolog is by itself no language for number crunching or SIMD, nor does it usually deal with memory alignment, sparse matrices, or other data layout concerns in the way C++ for CUDA does. OTOH, Inductive Logic Programming (ILP) has been a major application of Prolog as recently as 2012 (ProGolem, Aleph, etc., some of which have been taken into commercial endeavors). But I'm not familiar enough with ML to contrast with the last decade's ML push/approaches, let alone have an opinion.

I think that, in a better version of this world, we'd be using logic programming for interacting with databases. Logic programming specializes in describing relations between data, so to me it seems like the perfect paradigm for database queries.

Some people at google seem to be working on this idea: https://opensource.googleblog.com/2021/04/logica-organizing-...
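A sketch of what that could look like, with a hypothetical two-table schema written as facts and a "query" written as a rule (the shared variable Name is the join):

    /* Hypothetical tables */
    employee(alice, sales).
    employee(bob, engineering).
    salary(alice, 50000).
    salary(bob, 60000).

    /* Roughly: SELECT name, amount FROM employee
       NATURAL JOIN salary WHERE dept = sales */
    sales_salary(Name, Amount) :-
        employee(Name, sales),
        salary(Name, Amount).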

I'm a Prolog newbie myself, but an experienced ML practitioner.

Doing "high-level" model-building in a Prolog-like DSL might be interesting, but lower-level numerical code seems like a chore.

The real "value" of Prolog (in my novice estimation) is querying data (i.e. Datalog), transforming between data formats, and building "logic-driven" applications like command-line tools, API clients, etc.

>> I'm a Prolog newbie myself, but an experienced ML practitioner.

How experienced? Inductive Logic Programming has been a thing since 1991 :-)

Let's go with "ML as in applied statistics", not the traditional stuff.

Fair enough, that's the common interpretation.

I should have known to be careful with that term in a Prolog thread!

Nah, I'm just a compulsive pedant :P

Statistics, no. Machine learning, yes. The most successful inductive programming approaches (learning programs from data) are Inductive Logic Programming approaches that basically learn Prolog programs from data. Prolog, as a language with an automated deductive theorem prover for an interpreter, is uniquely suited to learning new programs because, it turns out, induction (learning new theories from observations and existing theories) is the inverse of deduction (explaining observations with existing theories).

For some modern examples of learning Prolog programs with Prolog see:

1. Metagol:


2. Louise:


3. Popper:


The first repo has a bibliography right at the end, though it's maybe a bit too formal.

Full disclosure: I started my PhD on Metagol and Louise is my work.

For scientific computing, check out this paper:

Using logic programming for theory representation and scientific inference


And see repo with code here:


> is Prolog a good language for doing statistics or ML or scientific computing?

Not really, but the interesting thing about Prolog is ILP (inductive logic programming). ILP does things current deep learning can not: learning compact models, in a data-efficient manner, while utilizing explicit background knowledge. It has its own limitations though, related to noise tolerance, scalability and efficiency.

It is true that these were problems with ILP. But, have a look at the Top Program Construction paper:


The experiments section shows one example of learning a program with a few thousand clauses and one example of learning with classification noise (i.e. misclassified examples), all in polynomial time and without "magic parameters" to overcome noise.

P.S. Er, actually I'm the corresponding author of that paper so eh, take my comment with a grain of salt as usual when researchers promote their research...

I am very interested in ILP for some medical meta-research, e.g. learning clinical guidelines (= the care pathway for a particular condition) from papers on a topic. Do you think the current state of the art makes that possible? Would you be interested in such a thing? (I am a European academic MD.)

There would be a lot of noise, but there are big databases where the conclusions of clinical papers are already in predicate form.

>> There would be a lot of noise, but there are big databases where the conclusions of clinical papers are already in predicate form.

What, really? Well in that case, that's ideal.

To clarify: yes, it's very, very possible with the current state of the art. If you had unstructured text you'd have to do a lot of leg work to put it into a structured form, but if it's already in some kind of formal, er, form, then that's perfect.

I am interested, but I'm in the final year of my PhD and about to start on my thesis (I have a second paper to submit by May 1st; then it's thesis-writing time after that), which means I don't know exactly how much other work I can justify. But I would definitely be interested in demonstrating a real-world application of the system described in the paper I link to above. Or, if you're more interested in a different system, I can at least advise you on how to use it.

Would you like to get in touch using the email in my profile? I trust you'll know how to use vim :-)

Loved the Thelma and Louise connection!

Thanks a ton for all these answers. Definitely had a couple of a-ha moments from these. I don't have enough time on hand at the moment to learn Prolog, but it's definitely one of the things I'll experiment with. Thank you all again. :)

Prolog was the first programming language I was taught at university. I’m not sure it was the best introduction to programming as it’s quite different from most other languages. I think it expressed the high-brow attitude of the lecturer.

Same for me, but I do think it is a good introduction.

It could just be my bias, but everyone I have met who did some Prolog after Java and C said they found Prolog difficult.

None of my fellow Prolog-first cohort seemed to have any problems with other languages.

I did some Prolog in college and always thought it was neat. I was able to internalize and reapply the idea of data and relations to make smaller rules-engine-type things in other apps. You see some similar patterns in some of the declarative build languages, here and there. I don't feel like I fully grok Prolog though.

For example, how do you write fast Prolog? Often it's fast, but when it's not, I feel a bit helpless. Are there any good articles with a bigger-picture look at Prolog performance?

Firstly, as the author of the course, many thanks for the publicity.

Secondly, a teething problem which caused the video lectures to bring up an error screen has been fixed.

It's my first attempt at teaching (I hope that's not too obvious). I volunteered to be a guinea pig for Coursera's new "community guided project" courses, which has been very educational for me. I hope others manage to learn something from it too.

Are there any Prolog engineer jobs, or is that usually done from another role? How could one find a Prolog job?

Prolog is great, but I wonder if it would be more successful as a library than as a stand-alone language? It seems like a tool you would reach for for very specific tasks in an application, rather than something you would program a whole application in.

See minikanren and The Reasoned Schemer for an embedded (into a host language) logical/relational language that does what you describe. I think there's a lot of value in learning Prolog on its own, but if I were to try and persuade an employer to use the logical/relational paradigm in a project again I'd probably use a minikanren implementation in a language they already utilize.


Well, if you're looking for such a thing:

https://apice.unibo.it/xwiki/bin/view/Tuprolog/ for Java

http://tau-prolog.org/ for Javascript

Thanks! I loved this class in college. It will be fun to study this material again.

It would be great if it were followed by a more advanced class.

I feel like there's a big online niche for intermediate-level programming tutorials. I googled around a few months ago for SQL resources to assist a team member who'd progressed past basic operations and joins and found little of use. I ended up writing my own set of questions like "write a query to remove duplicate records from this table".

I've given this a lot of thought. My main conclusion is that independent learning usually breaks down at intermediate levels.

Let's say a novice is someone who has no skill in a topic. A beginner is someone who knows a bit. Going from novice to beginner means increasing your skill level to the point that you can complete some tasks.

An intermediate is someone who can do more than a beginner. To go from beginner to intermediate, you build on your foundation. Wait, what foundation? What skills do you already have? What did you learn when going from novice to beginner?

We don't know. This is the huge difference between novice -> beginner and beginner -> intermediate. The second transformation has dependencies.

There are solutions to this that aren't hard. When you run into an unmet dependency (the author assumes you know something you don't) backtrack and learn it. This works in theory and usually fails in the real world.

Most people hate unmet dependencies of knowledge. Hate, hate, hate them. Unmet dependencies make people feel dumb, and people hate feeling dumb. (You are most likely this way and don't realize it. If not, you have a strength and you should be using it.)

I believe this is the problem that traditional learning solves (a bit) and independent learning does not. You can rant about how much colleges suck (I do sometimes) but they provide a few important things for real people learning: A track meant to avoid unmet dependencies, some amount of resources to backfill unmet dependencies, and social pressure to stick through it even when you hate it.

Hyperlinks and footnotes. I wish blog/tutorial content making use of various libraries or non-trivial language features would link to something providing more information if it's available. Even a citation (see "The Go Programming Language" section 4.5) would be helpful.

I'm learning Go (relearning, I learned it when it first came out but never used it) and came across something like this:

  type Record struct {
    Name string `json:"name"`
  }

My first thought was, "Why is there a reference to JSON here?" The reason became apparent in the next section (so the order of presentation was poor to begin with), but my second thought was, "The language/compiler understands JSON?" If, on the first instance of this form, there were a link, margin note, or footnote to an explanation, then the novice (either a programming novice or a language novice) would be able to dive more effectively into the material in a guided fashion.

This is what intermediate-to-advanced material can do to really help out, especially for people who want to see real world use-cases and for one reason or another don't want to or haven't fully gone through all the basic material (or just forgot parts of it).

Backfilling the dependencies is a graph traversal problem. An author can put work in and add references for everything and it will help a lot. The problem is as soon as those links are to something that doesn't follow this pattern (most things), the student will be in the same position.

That's a good point, and certainly an argument for intermediate courses being directly linked to novice courses which cover the prerequisites.

I believe there's also a factor that intermediate knowledge requires much more of a conceptual grasp. Selecting or joining on the correct columns is one thing, but actually knowing when to use what operation and which might have better performance means you need an intuition about the data structures. This is much harder to teach, and probably the only really good way is to get people to attack lots of problems set at the right level of challenge.

Regarding intermediate-level SQL, I'd highly recommend Jennifer Widom's edX courses[0]. They have a good mix of CS theory and practical exercises, which combined gave me a big push in my approach to databases.

[0] https://www.edx.org/course/databases-5-sql

I had the same problem with SQL for ages, until a few months ago when I had a bunch of small trials by ember (not really fires) at a job I had just started.

Overall, I agree that there's a dearth of intermediate content out there.

I can't launch the Swish editor connected with the course. LTI error, which seems to be something to do with React.

There was a glitch which has been fixed. Hopefully it will work now.

Same for me.

Great resource - after appreciating the power of Prolog as a database query language, graduates can move immediately to see that idea in action: https://github.com/terminusdb/terminusdb
