Ask HN: What's Prolog like in 2024?
427 points by overclock351 9 months ago | 276 comments
Hi, I am a compsci student who stumbled upon Prolog and logic programming during my studies.

While I have seen the basics of vanilla Prolog (atoms, predicates, cuts, lists and all that jazz) and a godawful implementation of an agent communication system that runs on SICStus Prolog, I would like to know more, because I think this language might be a powerhouse in itself.

Since my studies are quite basic in this regard, I would like to expand my knowledge and kind of specialize both in this world and in another one (ontologies :D) that I really enjoy.

What's Prolog like in 2024? What are you wonderful people doing with it?

thanks from a dumbass :D




Prolog has reached an exciting new milestone with Scryer Prolog. It is the first highly performant, open-source, ISO-compliant Prolog.

I would check out Markus Triska's work to have your mind blown:

https://www.metalevel.at/prolog

https://youtube.com/@thepowerofprolog


I interviewed and helped hire Mark Thom, the original author of Scryer. I also follow Scryer with interest, even though most of my limited Prolog use has been with SWI Prolog (and one large project with ExperProlog in the 1980s).

One thing to check out: Prolog plays fairly well with Python, providing opportunities for hybrid projects.


On playing well with Python: this was on the front page some time ago: https://arxiv.org/abs/2308.15893

"The Janus System: Multi-paradigm Programming in Prolog and Python"


I am quite pleased with the ability to easily use Prolog from within Python and vice versa. For my tastes, it's now one of the easiest and most expressive solvers to plug into. I'm starting to accumulate useful solvers here: https://github.com/philzook58/prologsolvers/tree/164297d87f6...

You need to install SWI-Prolog (https://www.swi-prolog.org/download/stable) and then pip install janus_swi.

A simple example to get started: https://www.swi-prolog.org/pldoc/doc_for?object=section(%27p...

  import janus_swi as janus

  # Load Prolog source code from a string; "path" is just the nominal
  # file/module name that janus associates with this text.
  janus.consult("path", """
  edge(a,b).
  edge(b,c).
  edge(c,d).

  % Tabling memoizes path/2 and guarantees termination even on cyclic graphs.
  :- table path/2.
  path(X,Y) :- edge(X,Y).
  path(X,Y) :- edge(X,Z), path(Z,Y).
  """)

  # Iterate over all solutions; each one is a dict of variable bindings.
  list(janus.query("path(a,Y)."))


On the topic of multi-paradigm programming, including logic programming, Oz/Mozart is an obligatory mention. See CTM and http://mozart2.org/mozart-v1/doc-1.4.0/tutorial/index.html.

The authors were fairly prominent Prolog researchers. It's sad that Van Roy is retiring and nobody is taking this forward. Alice ML, a Standard ML dialect inspired by Oz, is also abandonware.


Hey, thanks! That looks cool.


How do you normally use Prolog and Python together? I had looked into embedding logic programming within Python in the past, and found a lack of satisfying options, but maybe I didn't know where to look.


I have two short examples in one of my books that I am currently rewriting. Here is a link directly to the Python + Prolog interop examples: https://leanpub.com/pythonai/read#use-predicate-logic-by-cal...


Thanks for the link. I have played with PySwip (https://github.com/yuce/pyswip), and the MQI looks like a more maintainable approach to integrating SWI-Prolog with Python (https://github.com/SWI-Prolog/packages-mqi).

The biggest source of friction I noticed when playing with PySwip was that because Prolog code was represented as strings, you avoided generating it on the fly. It would be nice to have an embedded DSL for Prolog in Python. (I am thinking something like SymPy or the Pony ORM—https://github.com/ponyorm/pony.)


I noticed the same friction while trying to integrate Answer Set Programming solvers into Python projects. The people who built the dominant ASP solver actually provide nice solutions though. Possible inspiration for Prolog tooling:

Clorm (Clingo ORM) [1] makes it easy to create facts after you define simple predicate Python classes. Here's an example project of mine which uses it to set up a scheduling problem (Python -> ASP) and to present the results (ASP -> Python).

https://github.com/raceconditionrunning/relay-scheduler

Clingo (the solver) exposes its internal AST implementation through Python bindings[2], so you can build up rules or other statements from typed components instead of strings. This simplifies the translation bits of implementing an ORM or whatever kind of wrapper a developer would prefer.

[1] https://github.com/potassco/clorm [2] https://potassco.org/clingo/python-api/current/clingo/ast.ht...


This is cool! I am glad to see that other people have thought in the same direction—and actually wrote the code. I have another reason to learn ASP.


Don’t go looking at Pony’s source code for inspiration. The API is neat, but when your app starts to get complicated it begins to not work well.


Thank you!


Do you have any papers comparing Scryer with other Prolog systems (like SWI-Prolog or SICStus Prolog) performance-wise?


There are some benchmarks here, by Jan Wielemaker (the SWI-Prolog author), running SWI-Prolog's benchmark suite on different Prolog systems:

https://swi-prolog.discourse.group/t/porting-the-swi-prolog-...

He finds Scryer performs worse, which he comments on; he also explains some tradeoffs and historic choices in SWI's design that affect its performance. I think I have seen the author of Scryer saying that's not surprising, as Scryer is still building up core functionality where SWI has had 30+ years to optimise, but I don't remember where I read that.

SWI has a document explaining some strengths and weaknesses regarding performance: https://www.swi-prolog.org/pldoc/man?section=swiorother

Edit: some discussion on Scryer previously on HN: https://news.ycombinator.com/item?id=28966133


Another table (in the same thread) comparing more systems: https://swi-prolog.discourse.group/t/porting-the-swi-prolog-...


So SWI appears to be more performant, and it has an open license; so, per the GGP's claim regarding Scryer in the post above, it must not be ISO-compliant?


That's right; comedian Emo Phillips had a bit about it:

"Once I saw this guy on a bridge about to jump. I said, "Don't do it!" He said, "Nobody understand me." I said, "What's so special about you?"

He said, "I'm a computer guy." I said, "Me too! Desktop, tablet, console, smartphone?" He said "Desktop, mostly", I said "Me, too! Mac, Linux or Windows?" He said, "Any, I'm a programmer." I said, "Me, too! which style? OOP, Imperative, Functional, Logic, Array, Stack" He said, "Logic." I said, "Me, too! What subset? Answer Set Programming, Abductive Programming, Prolog, Datalog?" He said, "Prolog." I said, "Me, too! Conformant with the ISO/IEC 13211-1:1995 (core) standard term syntax for the period character or non-conformant extention decried by members of the ISO/IEC JTC1 SC22 WG17 working group?"

He said, "SWI Prolog 7" I said, "Die, heretic!" And I pushed him over."

- https://news.ycombinator.com/item?id=26624442

or read more seriously here:

- https://www.complang.tuwien.ac.at/ulrich/iso-prolog/SWI7_and...


I can't help myself.

> "Once I saw this guy on a bridge about to jump. I said, "Don't do it!" He said, "Nobody understand me." I said, "What's so special about you?"

He said: "I don't want to jump".


I don't get it

* Jumps off a bridge


A key performance attraction of Scryer Prolog is its space efficiency in representing lists of characters, yielding a representation 24 times (!) more compact than a naive implementation's.

With Scryer Prolog and other recent systems that implement this representation, such as Trealla Prolog, we can easily process many GBs of text with DCGs, arguably realizing the full potential of Prolog's originally intended use case for the first time. Trealla Prolog already goes even further and allows overhead-free processing of files, using the system call mmap(2) to map files into virtual memory, delegating the mapping to the operating system instead of the Prolog system.
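
To give a taste, here is a minimal DCG sketch in the style of Scryer Prolog (where double-quoted strings are lists of characters; library names and file-name notation vary between systems, and "big.log"/"error" are made up). It uses the ...//0 "any sequence" idiom from Markus Triska's material to scan a file lazily:

    :- use_module(library(dcgs)).    % Scryer; DCG support is built into many systems
    :- use_module(library(pio)).     % phrase_from_file/2

    ... --> [] | [_], ... .          % matches any sequence of elements

    seq([]) --> [].
    seq([E|Es]) --> [E], seq(Es).    % matches exactly the given list

    contains(What) --> ..., seq(What), ... .

    % ?- phrase_from_file(contains("error"), "big.log").

With the compact string representation described above, such a query can scan very large files without materializing a naive cons-cell list of characters.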

The linked benchmarks do not test these aspects at all, and in addition use a version of Scryer Prolog that was completely outdated already at the time the benchmarks were made: The benchmarks use Scryer Prolog v0.8.127, which was tagged in August 2020, more than 3 years (!) before the benchmarks were posted. The linked benchmarks thus ignore more than 3 years of development of a system that was at that time 7 years old. Newer versions of Scryer Prolog perform much better due to many improvements that have since been applied. More than 1700 commits were applied between these dates.

In the face of the 24-fold reduction of memory use that the above-mentioned efficient string representation enables, small factors of difference in speed between different systems are in my opinion barely worth mentioning at all in any direction.

And yes, in addition to this great space efficiency, the strong ISO conformance of Scryer Prolog is also a major attraction especially when using it in highly regulated areas. For example, here is a recently envisaged application of Scryer Prolog in the context of machine protection systems (MPS) of giant particle accelerators, where adherence to industry standards is of great importance for warranty reasons among others:

https://github.com/mthom/scryer-prolog/discussions/2441

As another example, a medical application of Scryer Prolog, in the highly regulated domain of oncology trial design:

https://github.com/mthom/scryer-prolog/discussions/2332

Here is an overview of syntactic ISO conformance of different Prolog systems:

https://www.complang.tuwien.ac.at/ulrich/iso-prolog/conformi...


>> The linked benchmarks do not test these aspects at all, and in addition use a version of Scryer Prolog that was completely outdated already at the time the benchmarks were made: The benchmarks use Scryer Prolog v0.8.127, which was tagged in August 2020, more than 3 years (!) before the benchmarks were posted. The linked benchmarks thus ignore more than 3 years of development of a system that was at that time 7 years old. Newer versions of Scryer Prolog perform much better due to many improvements that have since been applied. More than 1700 commits were applied between these dates.

In the SWI-Prolog discourse thread linked above, this is pointed out to Jan Wielemaker, who clarifies that it was a mistake. He then repeats the benchmark, comparing a newer version of Scryer to SWI, and finds that Scryer has improved significantly:

> Updated Scryer Prolog to 0.9.3. They made serious progress. Congrats! The queens_clpfd.pl and the sieve.pl benchmarks have been added. The ISO predicates number/1 and retractall/1 have been added. I had to make more changes to get the code loaded. Creating a module with the programs and some support predicates somehow did not work anymore (predicates became invisible). Loading a file programs.pl from a directory holding a subdirectory programs silently loaded nothing until I added the .pl suffix. The sieve bar is cut at 20, but the actual value is 359.

https://swi-prolog.discourse.group/t/porting-the-swi-prolog-...


> adherence to industry standards is of great importance for warranty reasons among others

This is mostly a nice talking point rather than an actual thing, right? Scryer's license contains the usual all-caps NO WARRANTY and NO FITNESS FOR A PARTICULAR PURPOSE wording. Also, the links you provided describe these applications without references to warranties and standards and regulation. The users in these super-sensitive domains don't seem as sensitive about them as you claim.


> the links you provided describe these applications without references to warranties and standards and regulation.

This is not true. For example, quoting from page 2 of the paper that is linked to in a discussion I posted, An Executable Specification of Oncology Dose-Escalation Protocols with Prolog, available from https://arxiv.org/abs/2402.08334:

"Standards are of great importance in the medical sector and play a significant role in procurement decisions, resolution of legal disputes, warranty questions, and the preparation of teaching material. It is to be expected that the use of an ISO-standardized programming language will enable the broadest possible adoption of our approach in such a safety-critical application area. For these reasons, we are using Scryer Prolog for our application. Scryer Prolog is a modern Prolog system written in Rust that aims for strict conformance to the Prolog ISO standard and satisfies all syntactic conformity tests given in https://www.complang.tuwien.ac.at/ulrich/iso-prolog/conformi...."

Regarding warranty guarantees of Scryer Prolog, may I suggest you contact its author if you need to negotiate arrangements that are not catered for by the only licence terms you currently have access to?

One important advantage you get from the strict syntactic conformance of Scryer Prolog is that it reliably tells you what is Prolog syntax and what is not. In this way, you can use it as a free reference system to learn what Prolog is. The conformance makes it easier to switch to other conforming systems, such as SICStus Prolog which also offers different licences and commercial support, when you need to.

> The users in these super-sensitive domains don't seem as sensitive about them as you claim.

I am at a loss about this phrasing and also about the content of this text. Apart from the facts that I did not use the wording "super-sensitive", and that the importance of standards is explicitly stated in the paper I quoted above: is there even the slightest doubt about the great importance of standards when building and operating giant particle accelerators or devising dose escalation trials in clinical oncology?


I acknowledge that you also included your nice talking point in a paper you published on arXiv. Citing yourself doesn't convince me any more of the credibility of this argument.

> is there even the slightest doubt about the importance of standards when building and operating giant particle accelerators

The particle accelerator application is a checker for existing JSON config files. The accelerator is already running with those files. The proposed project is in an early stage. The checker will add more assurance, which is nice. The checker's author does not talk about the importance of warranties or standards. The checker could just as well be implemented in some non-ISO dialect as long as that dialect has a reliable specification and implementation.

So yes, there is the slightest doubt.

Edit: BTW, your oncology paper heavily uses CLP(Z), which does not have an ISO standard, so your argument is... The base language must be standardized, but arbitrary nonstandard extensions are OK? Please clarify as I've probably misunderstood.


> CLP(Z), which does not have an ISO standard

CLP(FD/Z) is a candidate for inclusion in the Prolog standard: several Prolog systems provide it with notable commonalities in features, it fits perfectly into the existing language, and it follows the logic of the standard, including its error system. It can even be implemented within Prolog, provided a few basic features are present in a Prolog system. For instance, the CLP(Z) system I provide, which is used in the paper, already runs with little modification on several different Prolog systems, including SICStus, Scryer and Trealla. CLP(FD/Z) is an admissible extension of the existing standard:

    5.5 Extensions

    A processor may support, as an implementation specific
    feature, any construct that is implicitly or explicitly
    undefined in the part of ISO/IEC 13211.
This is completely different from modifications of the standard that do not fit at all into the standard. For instance, interpreting double-quoted strings differently from what the standard prescribes is not an extension in the sense the standard defines it, but a modification of the standard.
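
For readers unfamiliar with it, here is a minimal sketch of what the CLP(FD/Z) extension looks like in use; modulo the import line (library(clpz) on Scryer and Trealla, library(clpfd) on SWI and SICStus), the same query runs on several systems:

    :- use_module(library(clpz)).   % library(clpfd) on SWI/SICStus

    % ?- X in 1..3, Y in 1..3, X + Y #= 5, label([X,Y]).
    %    X = 2, Y = 3
    % ;  X = 3, Y = 2.

Note that it reads as ordinary Prolog: #=/2 is just a constraint-aware analogue of is/2 and =:=/2.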

In addition, Scryer Prolog has an execution mode where all its extensions are turned off. This is called a strictly conforming mode, and is also prescribed by the standard:

    5 Compliance

    5.1 Prolog processor

    A conforming Prolog processor shall:

      ...

      e) Offer a strictly conforming mode which shall reject
      the use of an implementation specific feature in Prolog
      text or while executing a goal.
In Scryer Prolog, the strictly conforming mode is the default execution mode.

Regarding the other points you mention: even though it may sound easy to say "as long as that dialect has a reliable specification and implementation", I know of no such system, and what I see from systems that do not adhere to the Prolog standard makes me doubt that such a thing is possible. The systems that do not follow the standard often have elementary syntactic problems, such as reading a Prolog term that they themselves emitted back as a different Prolog term: a recipe for disaster, and unacceptable in every domain I know.


> For instance, interpreting double-quoted strings differently from what the standard prescribes is not an extension in the sense the standard defines it, but a modification of the standard.

Agreed, but it's also minor, as you can and should set the double_quotes flag; otherwise your program doesn't have portable semantics even among ISO Prolog systems.
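
For example, one ISO directive fixes it for the whole program (a minimal illustration):

    :- set_prolog_flag(double_quotes, codes).   % or: chars, atom

    % ?- X = "abc".
    % X = [0'a,0'b,0'c].   % with codes; [a,b,c] with chars; abc with atom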

> Even though it may sound easy to say "as long as that dialect has a reliable specification and implementation", I know no such system that exists, and what I see from systems that do not adhere to the Prolog standard makes me doubt that such a thing is possible.

Of course it is possible to program against the quirks of a given implementation. That's what you yourself are doing with your CLP libraries. As you note, your main target has different quirks from other targets.

More broadly, Scryer itself demonstrates that it's possible to program against a programming language that doesn't have an ISO standard but does have a good enough specification and an implementation that adheres to that specification.

> The systems that do not follow the standard often have elementary syntactic problems, such as reading a Prolog term that they themselves emit into a different Prolog term, a recipe for disaster and unacceptable in every domain I know.

You're painting with a very broad brush here. What implementations, and what kinds of terms? If your examples involve infix dot, that would be the kind of term nobody uses and nobody should use in modern Prolog, as you well know. Some of these syntactic problems only appear if you go looking for them. Minor syntactic annoyances will be caught in testing.

I agree that such things are bad, but they are knowable, controllable, and quite probably much less relevant in practice than you suggest.

Very very tangentially: The company I work for is very serious about its software supply chains. If we want to use external software for development, we must apply for permission. For that permission, actual programmers and lawyers trawl through the code and licenses and documentation. Scryer's license file lists one copyright holder, and there are many source files without copyright headers, and then there are many source files with copyright headers that name another copyright holder. Our lawyers would not allow us to touch such a system. If you're serious about promoting Scryer as a serious Prolog for serious use, you might want to consider cleaning this up.


Or maybe the GGP was wrong about performance?

Some default settings in SWI are not ISO-compliant (for example, it uses a string type that does not exist in ISO). But these are minor things that won't usually trip you up when feeding it ISO code. You can set flags to get it to conform in the way you want. And you should set flags whenever you want your ISO Prolog programs to be portable, because the standard is very lax and leaves a lot of things implementation-defined. But it specifies the flags to get implementations into the state you want.


Prolog, and constraint programming especially, are great to have in your toolbox. I've done research in the field for years, and my job in industry today is writing Prolog. There are real issues with Prolog:

- no proper module or package system in the modern sense.

- in large code bases, extra-logical constructs (like cuts) are unavoidable and turn Prolog code into an untenable mess. SWI-Prolog has single-sided unification guards, which tackle this to a degree (see the sketch below the list).

- the lack of static, strong types makes it harder to write robust code. At least some strong typing would have been nice; see Mercury as an example of this.
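
As promised above, a minimal sketch contrasting a cut-guarded predicate with SWI-Prolog's single-sided unification rules (=>), where the guard takes part in clause selection and no cut is needed (max_ssu is a made-up name):

    % classic: a cut commits to the first clause
    max(X, Y, X) :- X >= Y, !.
    max(_, Y, Y).

    % SWI-Prolog single-sided unification rules
    max_ssu(X, Y, Z), X >= Y => Z = X.
    max_ssu(_, Y, Z)         => Z = Y.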

All that being said, Prolog is amazing, has a place in the future of programming, and gives you a level-up understanding of programming when you see how the type system of every OO program is itself a Prolog program.


I'd advise not using Prolog as a general-purpose programming language, but rather as an embedded DSL or as a service for the part it's really suited for (if your app involves exploration and search over a large combinatorial space in the first place, as in discrete optimization in industry, logistics, and finance). You really don't need yet another package manager and pointless premature modularization for modelling your business domains in optimization.


I've tried, casually, to take this approach, and what I've found is that basically none of the Prologs out there are really designed to be properly embeddable. Even Scryer Prolog, written in Rust, isn't really set up to be linked into a Rust program and run that way. I was able to "sort of" make it happen, but it wasn't a workflow that had been optimized for.

To be clear, what I'd like is to be able to fire up a thread hosting a Prolog runtime, stick predicates into it, and query it using an API in the host language's syntax. Instead, the best I could do was munge strings together and parse results out, sort of. And that was after a bunch of time spent trying to reverse-engineer Scryer's API.

I would love to embed a Prolog to host my application's business rules and knowledge. I could even see it being super useful in a game (think of the myriad crazy rules, interactions, and special cases in a game like Dwarf Fortress...).


Yeah, the preferred approach would be to run a Prolog engine as a service and access it via the usual JSON-over-HTTP/REST protocols. That has the benefit of letting you adapt and scale/provision the specific Prolog engine load as well. For smaller projects I guess you could use miniKanren, which is specifically meant for embedding as I understand it, but even standard job shop scheduling and factory/office resource planning tasks would be better served by a Prolog (micro-/whatever) service IMO.


At this point, why not use one of the many other CP solver packages out there, or the layers on top of them like OR-Tools?


The domain-specific Prolog code bases you're going to create can still become large and represent a significant development effort. Prolog being an ISO standard with many conformant (or at least mostly conformant) implementations available, and with relatively strong mindshare and ecosystem compared to extremely niche "CP solver packages and OR-tools" (which one exactly?), significantly reduces project risks such as not being able to find experts, the system not meeting functional or performance requirements, or becoming obsolete down the road. The same cannot be said for some mythical "CP solver packages and OR-tools"; you've nowhere to go if your "CP solver packages and OR-tools" project fails. Optimization and scheduling/planning projects are, by their nature, somewhat experimental and need exploration. It would thus be very difficult to pick "CP solver packages and OR-tools" upfront.


> extremely niche "CP solver packages and OR-tools" (which one exactly?) significantly reduces project risks such as not being able to find experts, the system not meeting functional or performance requirements, or becoming obsolete down the road. The same cannot be said for some mythical "CP solver packages and OR-tools"

"Mythical CP solver packages and OR-tools"? Lol!

Google's OR-Tools [1] has been winning golds in the MiniZinc Challenge since 2013 [2]. In 2023 it won Gold in most categories, and the only Prolog to win, SICStus Prolog, took one Silver back. I'm curious where you're spending your time in the community if you can word a comment like this, as OR-Tools is the behemoth in this area and Prolog is the weird hacker siren call that appeals to people who certainly aren't about to publish anything novel in the space.

I say this as someone who has tinkered a lot with Prolog over the years and finds Prolog's execution model to work really well with the way I think about programming. Prolog and its hybrid solver model just isn't good enough at any one thing to make it SOTA. It's fun to tinker with, but I just don't think in 2024 it has enough to offer anyone who's not interested in the language or the WAM to make it worth exploring, especially not as an embedded constraint solver.

[1]: https://developers.google.com/optimization/

[2]: https://www.minizinc.org/challenge/


OK, but is the argument now just that because it's by Google, with its media presence, it's not niche? "Constraint solving" is an overly broad term that can encompass most of computer science, but it covers at least finite domain solving, interval propagation, and SAT solving as specific algorithmic approaches with very different applications.

Going by your post, potential users looking to solve planning, scheduling, or optimization problems will jump to very specific implementation techniques that have won synthetic benchmarks in basically unrelated domains, for showcasing "something with constraints". Classic OR is about optimizing systems of linear (in-)equations, but discrete optimization problems and most financial investment planning problems don't come in this flavor, and really need very laborious encodings (like equational systems with thousands of artificial variables) to fit OR model formulation requirements or SAT checkers. Great, now you're prematurely solving idiosyncratic representation problems; your consultants will surely rub their hands.

But I stand by my opinion that Prolog is, by far, a much better starting point for the kind of explorative programming required in this domain. Making Prolog fast and scalable on mainstream cloud hardware (as Quantum Prolog and SICStus are doing) has very much to offer users, and is behind many or even most real-world scheduling and optimization applications.


I agree with your point; I would just like to point out that maybe the OR-Tools they meant is the one made by Google, so a specific one: https://developers.google.com/optimization


Yes, that's the one, sorry. I typed the comment hastily and didn't go back to edit it later.


Which one is as developed, as universal and as capable as Prolog with CLP and/or DCG?

Serious question, I'd like to have something that's easy to integrate with Node.js.


OR-Tools has bindings in a lot of different languages, though JavaScript/Node doesn't seem to be a first-class supported environment. It looks like https://www.npmjs.com/package/node_or_tools ports a few solver packages into Node, so if those solvers fit your needs you can use that package.


To me this makes Prolog sound like a tool to reach for similar to SQL: a specialized language for asking specific kinds of search or query over your data.


Indeed, Prolog programs are sometimes also called databases. Some things Prolog can do over SQL:

- infinite data defined by recursive predicates (see the sketch at the end of this comment)

- flexible data structures (think JSON but better, called complex terms) and a way to query them (called unification algorithm)

- execution strategy fine-tuned for reasoning (called resolution algorithm). You can do this with SQL but you’d have to formalize things using set operations and it’d be very very slow.

On the other hand, SQL can query plain data very very fast.
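
A minimal sketch of the first point; the recursive ancestor/2 below is the moral equivalent of a recursive CTE in SQL:

    parent(tom, bob).
    parent(bob, ann).
    parent(bob, pat).

    ancestor(X, Y) :- parent(X, Y).
    ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).

    % ?- ancestor(tom, Who).
    % Who = bob ; Who = ann ; Who = pat.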


I've also wondered why Prolog, or at least Datalog, isn't available/used more widely as a query layer. The promise of SQL's natural-ish language really didn't lead to a level of adoption among non-tech workers even approaching the popularity of the spreadsheet, so the reason for that style of syntax didn't really pan out, and Prolog would appear to have some syntactic and capability advantages.


I concur. Prolog particularly excels as an advanced, embeddable configuration DSL that lets one express system configurations that would otherwise not be easily possible in a bespoke configuration language or format. I have used an embedded Prolog core to express complex installation configurations in the past with great success, and I would do it again for the right problem space.


The cluster autoscaler in Kubernetes uses a constraint solver. It translates configuration against dynamic, changing state within the cluster.

Using something like an embedded Prolog or miniKanren as the core of a Kubernetes operator is something I've wanted to try my hand at.


> when you get how the types in every OO program is a Prolog program itself

"Any sufficiently complicated type system contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Prolog."


Any sufficiently complex type system is indistinguishable from an esolang.


Maybe it's just me, but I see the lack of a package manager as a massive, massive pro. I can't stand how seemingly every language has a package manager which requires its own installation, and you have to learn how to use THAT thing, and then you need some library off GitHub that does some minor task really well but you can't just download the fucking code, you have to import it via, idk, the Fork-Lyft manager, which requires Python 3.3 and the PillJump framework, and it's just like, I just want a fucking function to parse JSON, I don't want to saddle my system with 600 MB of shit I don't need.

Old_man_yells_at_cloud.jpg


You can always just download the code, nobody's forcing you to use a package manager. It just turns out that unless you want to spend most of your life building and fixing other people's code, it's much easier to use the package manager. The inefficiency is the price we pay, but it's worth it.


Me too! I absolutely see a lack of package manager as a pro. I also hate to saddle anything with 600MB I don't need. 100% agree.

I would go as far as to say that Prolog is more a problem-solving language than a system-building language. Package managers and module systems are for the modularization of big systems. You don't need that when solving small recurrent problems. Furthermore, the lack of them forces you to avoid dependencies, which most of the time would end up as technical debt. IMHO.


don't confuse "module system" with "package manager"


You write Prolog code for a living? Where? Do you happen to have a story to share? I'm very curious.


Yes :) we make software that helps sell complex products (if your product has a million options and takes up a whole factory floor, you can't just have a series of dropdowns).


Thanks for the insight, Görkem! I always thought CPQs make a really good use case. We had so many problems with performance (not with your product). CPQ is becoming standard software for pricing contracts.


I suspect that his work is related to this: https://www.tacton.com/products/tacton-cpq/configurator/


Yes :)


There are a lot of problems that Prolog / constraint programming will solve very elegantly, and much more easily than imperative languages. I think constraint-based programming is seriously underused in industry, and too many programmers are unaware of it or unable to write constraint-based code. I have always hoped to have just a constraint-based programming subsystem in a lot of languages, for those niche cases.


It's great to hear new people are interested in the language! I was enlightened a couple years ago and fell in love.

Currently I'm focusing on creating easy-to-use embeddings of Trealla Prolog using Wasm. You can find my TypeScript library here: https://github.com/guregu/trealla-js and Go library here: https://github.com/trealla-prolog/go. The goal is to make the libraries as painless as possible. Trealla is a portable and lightweight Prolog written in C that supports CLP(Z) and is broadly compatible with Scryer. It's quite fast! I'm currently using it for some expert system stuff at $work and as an internet forum embedded scripting language for $fun.

Speaking of Scryer, they recently got their WebAssembly build working and I hope to contribute a JS library for them in the future as their API stabilizes. Scryer and Trealla are both aiming for ISO compatibility, so it's my hope that we can foster an ecosystem for modern ISO Prolog and provide more embeddings in the future. It's super convenient to get logic programmer superpowers in your favorite language. Also check out Scryer's new website: https://www.scryer.pl/

For something on the silly side, check out https://php.energy. Prolog Home Page, it's web scale :-). It's proof that you can integrate Prolog with bleeding edge stuff like Spin (server-side wasm ecosystem).



>Q: What if Prolog is not suitable for my employer’s problem domain?

>

>Prolog is not suitable for any problem domain, although this is more readily apparent for some domains than others.

At least they are honest about it LOL


Dang, substitute Lisp for Prolog and this describes me. Seriously though - Prolog is an awesome tool to have in your toolbox. I've implemented Prolog-like logic programming solutions in several places in my 40+ years of programming. Like rules for assigning molecular mechanics force field atom types.


> Like rules for assigning molecular mechanics force field atom types.

Can you describe a bit more how prolog helped you here? Thanks!


If it's an official production system you want, then use OPS5, not Prolog!

https://en.wikipedia.org/wiki/OPS5

>OPS5 is a rule-based or production system computer language, notable as the first such language to be used in a successful expert system, the R1/XCON system used to configure VAX computers.

>The OPS (said to be short for "Official Production System") family was developed in the late 1970s by Charles Forgy while at Carnegie Mellon University. Allen Newell's research group in artificial intelligence had been working on production systems for some time, but Forgy's implementation, based on his Rete algorithm, was especially efficient, sufficiently so that it was possible to scale up to larger problems involving hundreds or thousands of rules.


I used DEC's VAX OPS5 for a couple of years around 1990. I quite liked it, and the later versions had some really nice extensions over Forgy's original design.

Then we discovered that our particular rule base could easily be ported into C using a sequence of nested if/thens that ran much faster, and we stopped using OPS5. It was a great tool for doing the initial development, though.


Looks fun :D. I think that if I asked my manager to build something out of Prolog I would probably get stab... I mean fired, since most of us work in OOP. I would love to be the insane one asking for that :D.


You can use https://logtalk.org for OOP in Prolog; use it on top of SWI and you have bidirectional bridges to Python and Java:

https://www.swi-prolog.org/FAQ/Python.md

https://www.swi-prolog.org/pldoc/doc_for?object=section(%27p...


I only have one thing to say to this man, “hey! Quit stealing my moves!”


> Prolog is not suitable for any problem domain, although this is more readily apparent for some domains than others.

Fuckin' A.


what does that mean?


It's an excerpt from the article. Getting an explanation out of context is worthless.


Excerpt from what article? (edit: i see you're referring to the indented quote from the article, but i was asking what the "fuckin a." thing meant.)



In theory, Prolog is the king of languages. It is simultaneously a logical formalism, a language for computation (with a resolution system), AND the ultimate meta-programming language, as it's homoiconic but only goals are evaluated (there is no eager/lazy evaluation fuss: a term is just a term), and goals can only succeed (and have any consequence) if there is already a matching clause.

In practice, there are some very performant and maintained implementations with small but helpful communities.

Also in practice: with all of this power, it's clear that anything could be done (well) in Prolog, but it's not always clear what that way might be. DCGs are an example of a beautiful, elegant, simple, powerful way of building parsers (or state machines) that was not evident to the Prolog community for some time. The perpetual conundrum as a user will be: "I could do it this way, but there are certainly better ways of doing this, and I have many avenues I could explore, and I don't know which might be fruitful on what timeline".
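
As a tiny taste of the DCG style (a toy sketch, assuming the double_quotes flag is set to chars or codes so that "..." denotes a list):

    % a toy grammar of balanced parentheses
    balanced --> [].
    balanced --> "(", balanced, ")", balanced.

    % ?- phrase(balanced, "(()())").   % succeeds
    % ?- phrase(balanced, "(()").      % fails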


What is it like? 50 years of historic cruft. It's questionable whether there are more trip hazards than usefulness for ordinary coding. A fractured community where it feels like there are more Prolog systems than Prolog code. Learning Prolog is less "how do I do things in Prolog" and more "how do I contort my things to avoid tripping over Prolog?".

A few dedicated clever people and idealists and dreamers talking about ontologies and building things I don't understand, e.g. the link in https://news.ycombinator.com/item?id=40994780 that could either be genuinely "Prolog is suitable for things no other language is" or "Fusion is 10 years away" or "Perpetual motion is here and so is cold fusion!", I can't tell. But I suspect from the lack of visible activity out in the wider world, closer to the latter than the former. Or perhaps the people able to make use of its strengths are few and far between.

There's a saying about driving to a town which has been hollowed out and is now a road through some empty store fronts and car parks: "there's no there there". The soul of a place is missing, it's no longer a destination, just some buildings on some land. Prolog has the opposite of that, a main road straight past it, few buildings or people, but there is a there there - an attractor, spark of something interesting and fun. Buried in years of cruft. Might be a Siren's call though, a trap - but if it is it appears less dangerous than the LISP one.


> A few dedicated clever people and idealists and dreamers talking about ontologies and building things I don't understand

I was briefly deeply interested in ontologies via OWL and I suspect Prolog has the same issues that I think plague ontologies in general.

They are a fantastic tool for a system complex enough to be nearly useless. Modelling an ontology for a reasonably complex domain is unreasonably difficult. Not because the tools are bad, but because trying to define concrete boundaries around abstract ideas is hard.

What is a camera? A naive attempt would say an item that takes pictures, but that would include X-rays. Are deep-space radio telescopes cameras? Trying to fix those issues then causes second order issues; you can say it’s something that takes images from the visible light spectrum, but then night vision cameras aren’t cameras anymore.

The reasoning systems work well, they just don’t solve the hard part of designing the model.


I had similar discussions with people who wanted to encode published research into ontologies. I would ask researchers what they thought; the answer was always "great idea". I would then follow up with: how would you use it? No response. I finally concluded that it would never happen.

1. No one wanted it enough to pay for it to happen.

2. There is always a turnover of ideas coming and going which can never be sufficiently updated to keep it useful. Again, no one would pay anyway.

Tools like LLMs seem to fill the role now. I would like to see Prolog integrated with LLMs in some way (my imagination fails me as to how that would happen).


A theorem prover for the medical literature:

https://github.com/webyrd/mediKanren

http://minikanren.org/workshop/2020/minikanren-2020-paper7.p...

Not Prolog, though. But it gives an idea of the goals behind the classification of science papers.


How would you use it? For searches.

If I want to find something in the brain but not in bone structures. If I want to find something in a kind of cell that has a nucleus.

They are also extremely useful for automated annotation. Your automated system may annotate with an upper term because it doesn't have enough information to be more precise. That's already a big help for a human who comes along to put in a more precise term.

We are at a convergence of technologies, with ontologies, graphs, LLMs and logic programming. A lot of people were too early on this and were discouraged from pursuing it further by people who couldn't grasp why it was so important.


This is why Lenat and Cyc settled on micro-theories. They found it impossible to build a useful universal ontology, so they had to fracture it along domain boundaries.


I was just pondering if the Prolog universal quantifier would be applicable to reasoning about Cyc frames. Does your comment imply it's not?


I'm somewhat familiar with Cyc, but I'd never heard of this development of "micro-theories". It makes perfect sense though - to generalize, hugely structured ontologies break as soon as the second person tries to use them or they are used on a slightly different domain.

Anyway, Prolog should be suitable for reasoning over them, but it is only grounded in the "micro-theory" part.


Your camera example demonstrates that human knowledge is loosely structured and formalized in general, so you can't create a strict ontology. One way to work around this is to assign a confidence score to statements, so you will have something like: that Nikon device is likely a camera, and an X-ray machine is unlikely to be a camera, based on the current world model.


I don’t see an issue with saying “X-ray photography machines, and deep-space radio telescopes, are (or at least contains-a, in the case of the telescope) cameras”. They just aren’t ordinary cameras of the sort that a typical person might take a picture with.

I think most of the reasoning you would want to do with a concept of “camera” that excludes X-ray machines and telescopes, but includes night-vision, could be handled with “portable camera”?

Hm, I guess you probably want to include security cameras though..

Ok. “Portable cameras or security cameras”.


A universal ontology cannot have any notion of an "ordinary" camera, not because of expressive limitations but because it's subjective.

Is a CAT machine a camera? Maybe only its sensor and the computers that reconstruct images? Maybe just the sensor? It mostly depends on your location in the supply chain.

Is a box with a projection plane and no means to capture images a camera? Before about 1830, definitely (and then making photographs became a simple upgrade for your "camera obscura").


I don’t think the “before 1830” case is really an issue. That’s just an example of the meaning of words changing.

I didn’t mean that “ordinary camera” should be a term in the formal ontology. I meant something more like “If you want to formalize the notion of ‘a camera’, it should include the CAT machine and telescope. If you want to address only the types of cameras that you think of as ordinary cameras, you should add extra qualifiers to get at what you mean.” .

(Where, “what you mean” might not get the term “ordinary camera”, but something more clear and descriptive.)


Yes, I think that is the experience, for example, in what we called (or call) data science: most of the time is spent on ETLs rather than on ML methods. In a real company, linking data is difficult not for technical reasons but because it consumes time and resources.


Ontology: Study of the nature of being, becoming, existence or reality, as well as the basic categories of being and their relations (philosophy)

What does that have to do with this?

Is there some use of "ontology" in logic I have not heard of?


It would be this version of ontology: https://en.wikipedia.org/wiki/Ontology_(information_science)

Loosely speaking, ontologies are categories of objects defined by their attributes and relationships to other things. Where a hierarchy is a branching structure where items can only appear on the tree once, ontologies do not require everything to stem from a single "root" node and items can appear in the tree in more than one place.

It's a way of working around the fact that hierarchies can't model some things very well. E.g. "bipedal" is an attribute that can apply to both animals and robots; where does it go in a hierarchy so that it can apply to both without also implying that robots are animals or vice versa?
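
In Prolog terms, a hypothetical toy encoding makes the point: the attribute becomes a relation that simply cuts across the hierarchy (human and asimo are made-up individuals here):

    isa(human, animal).
    isa(asimo, robot).

    attribute(human, bipedal).
    attribute(asimo, bipedal).

    % ?- attribute(X, bipedal).
    % X = human ; X = asimo.   % no implication that robots are animals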


Domain-Driven Design - one of those things, like Agile, that triggers all sorts of holy wars - has a lot of overlap with the general concept of ontologies, to the point that I've seen some teams formalize all communication between microservices through a shared "ontology", which in reality was essentially a giant XML-based description of valid nouns and verbs that events could use to communicate between services.

Additionally, there's a good deal of overlap with the "semantic web" concept, which itself had a good deal of hype with very limited (but important) application. Even the W3C has some published content on how all three fit together: https://www.w3.org/2001/sw/BestPractices/SE/ODA/


Maybe more in philosophy and classic general AI. Basically, ontologies are systems for categorizing and classifying knowledge. E.g. if you want to reason about self-driving, you would have an ontology that lets you separate traffic signs from billboards.


In this context ontology means common vocabulary/categories.


What do you mean by LISP as a siren call?

I’ve just started learning clojure and besides the lack of static types (which is pretty harsh for me), it seems like a fun and practical language.


Imagine it's, like, 1980 - or even earlier - and you can work in a language roughly as nice as Clojure, except the rest of the world is stuck working with pre-ANSI C or Pascal or FORTRAN or COBOL or raw assembly language. There's no Python or Java or C# or Ruby or Perl or Haskell or Scala or Kotlin or Rust or JS/TS. Nothing really resembling our modern idea of a high-level language.

(OK, there was Smalltalk. Let's ignore Smalltalk. Lord knows everyone else did.)

That'll alter your perception of reality a bit. Here they were, in possession of a tool massively more powerful - and more elegant - than what everyone else was using. And moreover, everyone else took a look at it and turned their noses up.

Clearly, you and your fellow Lisp programmers are a different breed, capable of seeing further than the rest of the unwashed masses. In a word, you were better than them.

It sounds like I'm being disparaging, but to a certain extent, I don't even think this was totally the wrong attitude to have. Elitist, definitely, but not wholly unwarranted. Lisp really was - in terms of expressiveness, anyway - that far ahead of the competition. And yet somehow that competition won. The world is cruel and unjust.

So Lisp becomes a kind of Us v. Them cult: if you've heard the good word of McCarthy, you're one of Us. If not, you're best ignored - too stupid to possibly have anything worthwhile to say.

(If you think I'm exaggerating, spend some time reading the words of Usenet Lisp institution Erik Naggum - R.I.P. - who serves as the most extreme but hardly the only example.)

This blinded Lisp diehards to the outside world, which slowly but surely, in many respects, began to catch up or even exceed Lisp.

The other thing is: not only is Lisp a powerful language, at its core is a beautiful, simple, expressive mathematical idea. Combine that with the way macros allow you to extend the language virtually infinitely, and there can be a near religiosity at the heart of Lisp - from one lambda all things depend. Lisp isn't just good engineering - it's a glimpse at the fundamental nature of computation, of the universe itself.

I'm not going to sit here and tell you that this is somehow a terrible thing, per se. But it can be incredibly alluring to the right kind of mind, and once you're in its thrall it's hard to get out. You might be working with the tool, but in another sense the tool is working on you. A Siren Song.


There's also the not-at-all-small factor that there was glam to Lisp.

Early Internet discourse around programming was dominated by people who had ties to elite universities in the 1980s, and who yearned for the times when the US government was throwing an abundance of money at the AI industry of the day.

They were the ones rubbing elbows with researchers from MIT, Stanford, Harvard, and Berkeley, who were using specialized hardware and software beyond the capabilities available to developers working on more mundane applications, all graciously funded by DARPA initiatives.

That experience was, in truth, unrelatable to young people reading the recollections of ESR and RMS of the period, the in-jokes of these people, their ideas and interactions. But the tales of Lisp, the Lisp hackers, and their fabled Lisp machines would be extremely appealing to someone who was very passionate about programming, striving for excellence as a programmer, and hoping to advance in life through merit. Paul Graham would seal the deal with his essays.


> (OK, there was Smalltalk. Let's ignore Smalltalk. Lord knows everyone else did.)

As someone who used Smalltalk/V on Windows 3.x and was aware of Smalltalk's role in OS/2 and SOM, alongside the whole VisualAge line of products before Sun came up with Java: there were enough people looking at Smalltalk until 1996.


Learning about the curse of Lisp is always an eye-opening point in one's career.


Wow, thank you for the context!! That was a fun read. And definitely explains some of the stuff I’ve read about Lisp. (I only ever thought to look into lisp because of this xkcd https://xkcd.com/224/)


Clojure is probably the most beautiful language I've ever worked with. Nothing is perfect, but Clojure is very simple and elegant.


Only downside is I don't know Java, so some things that should be obvious are opaque to me.


You really don’t need to know any Java. I don’t know Java either.

Even if you’re doing java interop, it’s quite easy to figure it out.


Last time I looked, file IO involved calling out to some Java class that I had no clue about, as I don't use Java. All the docs at the time just assumed you'd be able to figure this all out.

Edit: it's been like 5 years


The docs are better now (eg https://clojure-doc.org/articles/cookbooks/files_and_directo... ) but you still need to follow a Javadoc link every now and then for the full story.

Alternatively you can use a library such as https://github.com/babashka/fs .

I agree with the GP that you don't need to know any Java.


This was my experience too, at about the same time. I should really dig back into it at some point.


Java frolics in opaqueness.


What makes Clojure a non-starter for me is that it runs on the JVM.


Could you expand on why? It's not immediately obvious why that would be; my understanding is that the general consensus around here is that the JVM is a superb piece of tech with a bad rap due to Java the language.


Slow startup, huge memory consumption, design that inherently favors class-based languages like Java, the frequent need to use Java libraries.


> Slow startup

This makes the JVM a bit less suitable for programs with short lifetimes (like lambdas), depending on how sensitive startup times are in context... but it is mostly irrelevant in long-lived applications like services.

> design that inherently favors class-based languages like Java

Clojure abstracts over this so well that it's really a non-issue for a wide array of use cases / applications. When programming in Clojure, you really don't have to think about objects and classes at all, unless you really insist on doing so.

> frequent need to use Java libraries

This would be going against the grain: you will have a much better time staying within native Clojure. I've worked on commercial/production applications that barely had any Java interop, and whatever Java interop there was was rarely involved in day-to-day work.

---

Disclaimer: I also am not a huge fan of the JVM and I really dislike the Java world generally, but it never stood in the way of me getting stuff done with Clojure.


core.logic is pretty neat, too. Especially as it applies to this thread and the ancestor comments.


Lisps don't get in your way, but they also have no opinions, which is problematic for community development. As a static-type fan, it's the only language where I think the pros outweigh that one con.


Who still has nightmares of infinitely nested parentheses?


The more nightmarish thing about Clojure is realizing that, in truth, you have no idea what all these dicts you are passing around the terse, nil-punning functions of your codebase hold at any given time.


That was the case for me. I went all in drinking the Clojure koolaid and wrote some small internal CLI tools with it. If I came back to that code a month or two later I could only properly understand it if I opened up a REPL to debug it. I ported those tools to Java and they were dead simple to comprehend.


After a while, the less magic, the better.

One of the reasons Golang has found a use case in today's age is that there is a need for a programming language with just functions, loops, and if/else statements.


Yup. I learned Clojure just so I can use a Lisp and get paid for it, but there is some weird cult against all forms of typing. Even coming from a Common Lisp background, this was strange to me. In Common Lisp, there are implementations (like SBCL and ECL) that can make use of type declarations to produce efficient machine code and allow the compiler to catch errors that would otherwise be run-time errors. There's also other benefits like contextual autocomplete. The autocomplete in Clojure tooling is very basic, and many Clojure libraries try to make up for this by using qualified keywords everywhere. That way, rather than seeing all keywords ever interned, you can type ":some.namespace/" and your editor shows a dozen keys instead of hundreds of unrelated keys.

Many in the Clojure community believe that occasionally validating maps against a schema "at the boundaries" is good enough. In practice, I have found this to be insufficient. Nearly every Clojure programmer I know has had to "chase nils" as a result of a map no longer including a key and several functions passing down a nil value until some function throws an exception. (Note: I don't specify which exception, because it depends on how that nil value gets used!)

Refactoring Clojure code in general is a nightmare, and I suspect it is why many in the community are reluctant to change code in existing libraries and build entirely new things in parallel instead. Backwards compatibility is one often-cited reason, but I do think another reason is that refactoring Clojure code creates an endless game of bug fixing unless you have full test coverage of your codebase and use generative testing everywhere. (I've never seen a Clojure codebase with both of these things. I can count on one hand the number of Clojure codebases where generative testing is used at all).

Function spec instrumentation provides something that feels like runtime type checks in Common Lisp, but now you have to manually run certain functions at the REPL just to ensure some change in your codebase did not introduce a type error.

On the flip side, Java has things like DTOs which always felt too boilerplate-ish for me (though at least it provides useful names for endpoint data when generating Swagger/OpenAPI documentation). Even then, records in Java provide what are essentially maps with type safety and similar characteristics as DTOs.

I think the structural typing offered by languages like OCaml and TypeScript provide exactly what I'd want in Clojure. But when faced with feature requests in Clojure, people will state something like "I have never had a use-case for X, therefore you don't need X". In the case of criticisms, the response is often "I may have ran into X before, but it's so rare that I don't consider it a problem".


I still don't get how Java records can be used for anything like a DTO. Since you're a Clojure dev, you may remember the pattern Rich Hickey described as "place-oriented programming" :) Nearly every endpoint will have more than 2-3 fields, and you really don't want a Java record with more than that many fields for the same reason you don't want a Java method with that many parameters, e.g. doIt(Long, String, String, Long, String, int, int, String) <-- code smell.

And the problem I always see is something may start off as a Java record and then need to be refactored into a class as soon as 1-2 more fields are added.


> I still don't get how Java records can be used for anything like a DTO

In Clojure, we often deserialize a result set from a database to a vector of maps. These maps have different keys depending on what exactly your query was selecting. In Java, one often "projects" results to some DTO. This is one scenario where records offer identical functionality while avoiding boilerplate.

Regarding "place-oriented programming", records are immutable, so that is one technical advantage they have over handwriting a DTO. And from my short experience using web frameworks like Quarkus, it seems that a lot of the "design patterns" I see in documentation exist to help design easy-to-test programs rather than unfettered mutation.

Additionally, I have found records useful for describing the payload of endpoints that accept map-like data. Without records, I would be writing POJOs with public fields anyway.

Overall, Java records behave like TypeScript interfaces with awkward syntax. I have found them ideal for expressing type-safe, map-like data with minimal boilerplate.


I'm also using it for projections. But to be honest we have a fairly large Quarkus app and only use projections in a few places. For immutable classes with a large number of fields where instances have to be created manually I usually use the builder pattern. But the analogy to typescript interfaces is interesting.



Typed Clojure is interesting. However, until two months ago, it was practically unusable for most Clojure projects because of the lack of type inference in higher-order functions. This has changed, but there's another huge problem: nobody maintains type declarations for widely used libraries. If you look at alternatives to TypeScript, such as ReScript, you will find similar issues.

I still use Clojure, but I am fully aware of the kind of bugs to expect down the road. Typed Clojure would work if I could maintain types for each library I use, but that is simply too much effort on my part.

As for clojure.spec, I already addressed this in my post, but I will state the following.

Schema, Malli, and Spec are not substitutes for a type system. Each of these libraries explicitly states so. You still need to enable instrumentation and actually run the erroneous code that violates some contract. Most Clojure programmers have the habit of enabling instrumentation in dev and disabling it in production because validation is an expensive operation. I personally use Malli for data validation and coercion, but it does not make refactoring any easier, nor does it help autocomplete and other development-related tooling.

(Someone will probably link a malli document demonstrating clj-kondo linter generation, but even that is not a substitute. At best it detects arity errors and primitive type mismatches, not the shape of data in a map).


They seem to disappear with parinfer.


The "magic" of Prolog is built upon two interesting concepts : Unification ( https://en.wikipedia.org/wiki/Unification_(computer_science)... ) and Backtracking ( https://en.wikipedia.org/wiki/Backtracking ).

Often bad teachers only present the declarative aspect of the language.

By virtue of being declarative, it lets you express inverse problems in a dangerously simple fashion, but it doesn't provide any clue toward a solution. You then end up using a declarative language to provide clues that guide the naive engine toward a solution, making the whole code an awful mashup of declarative and imperative.

Rules:

- N integer, a integer > 1, b integer > 1

- N := a * b

Goal:

N = 2744977

You can embed such a simple problem easily but solving it is another thing.
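
For instance, here is the factoring problem above as a minimal CLP(FD) sketch for SWI-Prolog or Scryer (factors/3 is a name made up for illustration):

    :- use_module(library(clpfd)).

    % N is a product of two factors greater than 1.
    factors(N, A, B) :-
        [A, B] ins 2..sup,
        A #=< B,          % break the A*B / B*A symmetry
        A * B #= N.

    ?- factors(2744977, A, B), label([A, B]).

Stating the problem really is that easy; whether label/1 finds a factorization (or proves none exists) in reasonable time is entirely up to the engine's search, which is the point.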

The real surge of Prolog and other declarative constraint-programming languages will come when the solving engines get better.

Unification is limited to first-order logic; higher-order unification is undecidable in the general case. So we will probably have to rely on heuristics. By recasting Prolog goal solving as a game, you can use deep learning algorithms like AlphaGo's (Monte Carlo tree search).

Such an engine internally adds intermediate logical rules to your simply defined problem, based on similar problems it has encountered in its training set. It then solves them the way an LLM does: by heuristically picking the right rule from intuition.

The continuous equivalent of a sort of unification is Rao-Blackwellisation (done automagically by deep learning from its training experience), which allows one to pick the right associations efficiently, much the same way that a "most general unification algorithm" allows one to pick the right variable to unify the terms.


> The continuous equivalent of a sort of unification is Rao-Blackwellisation (done automagically by deep learning from its training experience), which allows one to pick the right associations efficiently, much the same way that a "most general unification algorithm" allows one to pick the right variable to unify the terms.

I don't know how to reconcile this statement about deep learning with my understanding of Rao-Blackwell. Can you explain:

- what is the value being estimated?

- what is the sufficient statistic?

- what is the crude estimator? what is the improved estimator?

Roughly, I think sufficient statistics don't really do anything useful in deep learning. If they did, they would give a recipe for embarrassingly parallel training that would be assured to reach exactly the same value as a fully sequential training. And from an information geometry perspective, because sufficient statistics are geodesics, the exploratory (hand-waving) and slow nature of SGD could be skipped.


Once you view Prolog goal reaching as a game, you can apply Reinforcement Learning methodologies. The goal is writing a valid proof, i.e. a sequence of picking valid rules and variable assignments.

Value being estimated: the expected discounted reward of reaching the goal. The shorter the proof, the better.

The sufficient statistic: the embedding representation of the current solving state (the inner state of your LLM (or any other model) that you use to make your choices). You make sure it's sufficient by being able to regenerate the state from the representation (an auto-encoder or VAE does the trick). You build this statistic across various instances of problems. It tells you, based on experience, what a judicious choice of variable is: similar problems yield similar choices.

The crude estimator: all choices have the same value, therefore a random choice. The improved estimator: the choice value is conditioned on the current embedding representation of the state using a neural network.

You can apply Rao-Blackwell once again by also conditioning on a one-step look-ahead (or, at the limit, applying it infinitely many times by solving the Bellman equation).

(You can alternatively view each update step of your model as an application of the Rao-Blackwell theorem to your previous estimator. You have to make sure, though, that there is no mode collapse.)

You don't have to do it explicitly; it happens under the hood through your choice of model for how the decision is picked.


A unique property of Prolog is that, given an answer, it can arrive at the original question (or, a set of questions – speaking more broadly).

Or, using layman terms, a Prolog programme can be run backward.


To be precise, a small number of very small Prolog programs can be run backwards.

There are essentially no significant Prolog programs that are reversible with acceptable efficiency.


To be even more precise, Prolog programs only ever run forward because the order of evaluation is fixed as top-down, left-to-right. These notions of "forward" and "backward" are very unhelpful and should be given up. Beginners find the order of evaluation hard enough to understand, let's not confuse them even more.

Also, the notion is woefully incomplete. Let's say we consider this "forward":

    ?- list_length([a, b, c], Length).
    Length = 3.
Then you would say that this is "backward":

    ?- list_length(List, 3).
    List = [_A, _B, _C].
Fine, but what's this then? "Inward"?

    ?- list_length([a, b, c], 3).
    true.
And then presumably this is "outward":

    ?- list_length(List, Length).
    List = [], Length = 0 ;
    ...
    List = [a, b, c], Length = 3 .
None of these cases change the order of evaluation. They are all evaluated top-down, left-to-right. The sooner beginning Prolog programmers understand this, the better. The sooner we stop lying to people to market Prolog, the better.
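
For the record, all four queries above are answered by one and the same definition, executed the same way each time. A minimal sketch, using CLP(FD) arithmetic so that it terminates in every mode:

    :- use_module(library(clpfd)).

    list_length([], 0).
    list_length([_|Es], N) :-
        N #> 0,
        N #= N0 + 1,
        list_length(Es, N0).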


What's going on in the second example? Did Prolog generate a list term stuffed with gensym variables, which satisfies the list_length being 3?


It generated a list stuffed with distinct logical variables. Any list of length 3 is unifiable with this list. Or in other words, this is the unique (up to variable renaming) most general list of length 3.

Whether this is a "yes" to your question depends on what your mental model of "gensym variables" is. They are variables, not symbols (which Prolog would call atoms).


I was largely joking. Even though the capability is there, it is not computationally practical nor possible to accomplish such a feat for any sufficiently complex programme.

In the most extreme case, attempting to run a complex Prolog programme backwards will result in an increase in entropy levels in the universe to such an extent that it will cause an instant cold (or hot) death of our universe, and life as we know it will perish momentarily (another tongue in cheek joke).


the bidirectional (relational) aspect of prolog is what got me into this. I love symmetries, so it was a natural appeal even before I learned about logic programming (Sean Parent gave a Google talk about similar ideas implemented in C++). That said, it's very limited. But I wonder how far it could go. (The Kanren guys might have more clues.)


Do you see a good way to include backtracking in an imperative programming language?

I can imagine how unification would work, since the ubiquitous "pattern matching" is a special case of Prolog's unification. But I've never seen how backtracking could be useful...


backtracking is perilous in general; logic programming languages have really nice abilities for such but I don't know how to avoid pathological inefficiency.


With memoization as in tabling (a.k.a. SLG-Resolution):

https://www.swi-prolog.org/pldoc/man?section=tabling

Re-evaluation of a tabled predicate is avoided by memoizing the answers. This can realise huge performance enhancements as illustrated in section 7.1. It also comes with two downsides: the memoized answers are not automatically updated or invalidated if the world (set of predicates on which the answers depend) changes and the answer tables must be stored (in memory).
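
A minimal sketch of the effect in SWI-Prolog: a single table/1 directive turns the naive, exponential Fibonacci definition into a linear one by memoizing answers:

    :- table fib/2.

    fib(0, 0).
    fib(1, 1).
    fib(N, F) :-
        N > 1,
        N1 is N - 1,
        N2 is N - 2,
        fib(N1, F1),
        fib(N2, F2),
        F is F1 + F2.

    ?- fib(100, F).   % immediate with tabling; hopeless without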

Known to the Prolog community since about the 1980s, if I got my references right.


Girard has some commentary scattered about his writing.

The search algorithms for logic programming are simply slow, it's a very interesting idea in programming languages, but there's a reason it's not widely used.

> PROLOG, its misery. Logic programming was bound to failure, not because of a want of quality, but because of its exaggerations. Indeed, the slogan was something like « pose the question, PROLOG will do the rest ». This paradigm of declarative programming, based on a « generic » algorithmics, is a sort of all-terrain vehicle, capable of doing everything and therefore doing everything badly. It would have been more reasonable to confine PROLOG to tasks for which it is well-adapted, e.g., the maintenance of data bases.

> On the contrary, attempts were made to improve its efficiency. Thus, as systematic search was too costly, « control » primitives, of the style « don’t try this possibility if... » were introduced. And this slogan « logic + control », which forgets that the starting point was the logical soundness of the deduction. What can be said of this control which plays against logic? One recognises the sectarian attitude that we exposed several times: the logic of the idea kills the idea.

> The result is the most inefficient language ever designed; thus, PROLOG is very sensitive to the order in which the clauses (axioms) have been written.


This is a great quote and sadly true. What text is this from?


For me this kind of criticism is very familiar. It comes from theoretical computer scientists who have purist ideological convictions about how a declarative language should look and behave, convictions that are as unrealistic (because impossible to implement on a real-world computer) as they are uninteresting for practicing programmers (because strictly a matter of aesthetics). Such critics have never made anything usable themselves and are simply angry that someone else made something that works in the real world while they were busy intellectually masturbating over their pure and untouchable vision.

Although I concede that my comment might be a bit unfair to Girard who did, after all, invent the mustard watch.


This comment is idiotic.


No, what's idiotic is the trite bullshit in the quote in the GP's comment- and from a logician from Aix-Marseille, no less!

Prolog is "well adapted" to the "maintenance of databases". The only reason this nonsense keeps being repeated is because Prolog programs are stored as rows in a database. It's like people look at a list of keywords, pick out "database" and go "ah, so Prolog is a language for databases". Zero understanding of what the database is in there for: because your program and your data are one.

Or take the "attempts" that "were made to improve its efficiency". What the illustrious academic is kvetching about here is the cut (!/0) an extra-logical construct used in Prolog to cut choice points (like markers in program state where execution backtracks to) and so lets the programmer control the program. Again, what we seem to have here is a bingo-card understanding of Prolog: someone wrote down the keyword "control", the academic looked at the cut and thought "ah, that's what 'control' means!". No, it means that an algorithm can be thought of as logic, that is always the same, and control, that depends on the executing machine. That's what "algorithm = logic + control" means, not that you get to cut choice points with a "control" structure.

That's what's idiotic, and btw that's the common misunderstandings that clueless Prolog "critics" have been making since forever. It's trite, tired, boring bullshit that makes it clear the "critic" has no idea what they're talking about and are just looking for something to say to show they're knowledgeable and smart.


"The Blind Spot: Lectures on Logic" by Jean-Yves Girard


Though I only know Prolog cursorily, it is on my todo list of languages to study. I think it has great value in that it teaches you a different paradigm for programming.

You might also want to look at Erlang which is used in the Industry and would be helpful for your future. Joe Armstrong was originally inspired by Prolog and he conceived Erlang as Prolog-Ideas+Functional/Procedural+Concurrency+Fault-Tolerance. Hence you might find a lot of commonalities here. Here is a recent HN thread on a comparison - https://news.ycombinator.com/item?id=40521585

There is also "Erlog" (by Robert Virding, one of the co-creators of Erlang) which is described as, Erlog is a Prolog interpreter implemented in Erlang and integrated with the Erlang runtime system. It is a subset of the Prolog standard. An Erlog shell (REPL) is also included. It also says, If you want to pass data between Erlang and Prolog it is pretty easy to do so. Data types map pretty cleanly between the two languages due to the fact that Erlang evolved from Prolog. - https://github.com/rvirding/erlog


Sure, Erlang was prototyped on Prolog because Prolog has excellent built-in facilities for domain-specific languages: you can define new unary or binary operators along with priorities and associativity rules (you can use this to implement JSON or other expression parsing in like two lines of code, which is kind of shocking for newcomers, but comes in very handy for integrating Prolog "microservices" into backend stacks), and you even get recursive-descent parsing with backtracking for free, as a trivial specialization of Prolog evaluation with a built-in short syntax (definite clause grammars).
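
A minimal sketch of the operator facility (the likes operator is made up for illustration):

    % Declare a new infix operator, then use it in ordinary clauses.
    :- op(700, xfx, likes).

    alice likes prolog.
    bob likes erlang.

    ?- Who likes prolog.
    % Who = alice.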

But apart from syntax, Erlang has quite different goals as a backend language for interruption-free telco equipment compared to Prolog.


In The Development of Erlang Joe Armstrong says "We concluded that we would like something like Prolog with added facilities for concurrency and improved error handling".

See pdf linked here - https://news.ycombinator.com/item?id=40998632


You're reading too much into that quote. This is in a section titled "early experiments". It was an initial goal.

There is a lot of historical connection to Prolog, due to the original implementation, and there are syntactic similarities and non-linear pattern matching and dynamic types and a general declarative vibe, but the actual end result of Erlang's evolution, despite the goal of "something like Prolog", is not very much like Prolog at all. Erlang is a functional language, not a logic language. Prolog is a logic language, not a functional language. General goals like in that quote can change over the decade-long development of a language.


You are stating some obvious things which are not what is being argued here. I only gave the above quote because it is the conclusion that the Erlang designers came to after a lot of research/playing/implementing with other languages (read the full paper and others listed below). I had also stated above that "he conceived Erlang as Prolog-Ideas+Functional/Procedural+Concurrency+Fault-Tolerance." So we already know Erlang is a different language. Note that Robert Virding also states in the Erlog page "due to the fact that Erlang evolved from Prolog". So obviously Prolog was a huge influence on Erlang design (not to be confused with the fact that the first experimentation was done in Prolog) in addition to other systems/languages. The above paper also states "It was a strange mixture, with declarative features (inherited from Prolog), multi-tasking and concurrency (inherited from EriPascal and Ada) and an original combination of error handling mechanisms". The last came from AXE/PLEX and others.

Joe Armstrong wrote two papers The Development of Erlang (linked here https://news.ycombinator.com/item?id=40998632) and a longer A History of Erlang (pdf at https://www.labouseur.com/courses/erlang/history-of-erlang-a...). In addition to his thesis (pdf at https://erlang.org/download/armstrong_thesis_2003.pdf) they provide a fascinating study into what goes into the design of a language i.e. lots of messy experiments, shifting goals, inspiration/features from many different languages etc. until everything coalesces into an organic whole which is then validated by users. Reading the above two papers will give you a more complete picture of Prolog's influence on Erlang (in addition to others).


Thanks, I know that history, and nobody is denying the influence. The broader context are your claims about "a lot of commonalities here". There are commonalities, but they are shallow.


On that we have to disagree. Leaving aside the non-Prolog features, much of the Erlang syntax is definitely inspired by Prolog (mentioned in many papers including https://www.erlang.org/faq/academic). That was the reason the HN submission I had referenced, which linked to a reference sheet comparing Prolog/Erlang/Elixir, was so interesting (direct link https://hyperpolyglot.org/logic). In most cases the Erlang syntax is just a simplified version.


I mentioned the syntactic similarities (and more) above. Syntax is shallow.


Yeah but to be honest, Erlang ended up being not something like Prolog at all.

I think Joe Armstrong was a user here and I interacted with him waaaay way back when I first joined. He's dead now :(



I saw, as I saw your upthread quote from the paper above but I think you misunderstand it. The previous paragraph gives context to your quote:

  The main conclusion [5] was that declarative language programs
  for POTS were a lot shorter and easier to understand than
  imperative language programs. Unfortunately the declarative
  languages lacked features for concurrency control and had poor
  error handling facilities.

  We concluded that we would like something like Prolog with added
  facilities for concurrency and improved error handling. No such
  language existed at the time.

"Something like Prolog" here reefers to the declarative features found in Prolog as well as other declarative languages, left unnamed in the quote, which make programs "a lot shorter and easier to understand thatn imperative language programs".

But that's where the "commonalities" you mention in your OP, between Prolog and Erlang, end. Similarly, CSS and XML are declarative but that's where their "commonalities" with Prolog (and Erlang) end.

I'm insisting on this because I'm concerned that your comment promulgates a common misconception about the, like you say, "commonalities" between Prolog and Erlang. These end with syntactic similarities, and misunderstanding this can cause some disappointment to people trying to go from one to the other. I've seen a similar misunderstanding arise about "commonalities" between Prolog and, e.g., Haskell - they both have weird, arcane syntax and immutable data structures, but that's all. Or think of C and javascript: both have Algol-like syntax, and in fact the first js compilers must have been written in C, but that's all. Etc.

There is much, much more to Prolog than the declarative syntax.


First, my other comments for reference; https://news.ycombinator.com/item?id=41013798 and https://news.ycombinator.com/item?id=41015406

I had explicitly stated above that "he conceived Erlang as Prolog-Ideas+Functional/Procedural+Concurrency+Fault-Tolerance." Here i am not making an equivalence between Prolog and Erlang but emphasizing the inspiration that Prolog provided for the design of Erlang (that is what is to be understood from all the links to the papers and comparison charts given in earlier comments). That inspiration is in the syntax of pattern matching, Atoms/Variables, Module/Export directives etc. Similarity of Syntax is very important for learning/understanding new languages since your cognitive load is decreased dramatically. A good example is C++ to Java/C# where the similarity in syntax (though the runtime object model is very different) is what was crucial to the widespread adoption of the latter. It is in this sense that Prolog to Erlang comparisons should be understood.

Your arguments of grouping/comparing widely dissimilar languages are somewhat disingenuous. A much better side-by-side comparison of languages is https://hyperpolyglot.org/ where the author has tried to group by intended functionality and historical development. I think this is a good way to do it.


>> It is in this sense that Prolog to Erlang comparisons should be understood.

What I'm saying is that this is the entirely wrong thing to focus on, i.e. syntactic similarities. Prolog is a language of the logic programming paradigm, one of the first ones. Its syntax is that of (a restriction of) the first order predicate calculus. Its programs are First Order Logic theorems. Its interpreter is an automated theorem prover. The motivation for Prolog is the ability to program a computer using the syntax and semantics of First Order Logic, to be able to prove the truth values of statements in a formal language automatically, with a computer. It has nothing to do with Erlang, a functional programming language designed to program telephony switches. Any similarity is superficial: Erlang expressions are not definite clauses; everything in a Prolog program is a definite clause, modulo punctuation. Knowing Erlang will not help you learn Prolog just because the syntax looks similar. You can forget about that right now- that's the misconception I'm trying to correct. Don't encourage people to try to understand Prolog that way because you will only cause them pain.

Take the site you link to as an example. It tries to bodge together Prolog with Erlang and Elixir in a "side-by-side reference sheet" that includes rows for "assignment" and "parallel assignment" using =/2. That's the unification predicate! Prolog does not have assignment! Just imagine the suffering of a novice programmer trying to use their knowledge of assignment, in Erlang or any language, to understand the following:

  ?- X = a, X = b.
  false.
That's just setting up programming students to fail, to fail to understand, to fail to learn- and to only succeed in blaming Prolog for being a stupid, painful language that is hard to learn. Of course it's hard to learn! If you go around telling people that they can learn it more easily if they know Erlang!

>> Your arguments of grouping/comparing widely dissimilar languages are somewhat disingenuous.

No, the point is that they are widely dissimilar and that you won't understand that if you just stop at the syntax. Like the site you link to, where someone clearly made an effort to memorise syntax but completely failed to understand semantics.

Semantics shmantics! The attitude I see here is the one that Peter Norvig criticises in "Teach yourself programming in 10 years": try to find a shortcut around the hard stuff so you don't have to use the brain. You don't learn anything that way.


You are on a completely different tangent here. To repeat myself, we are not making a equivalence comparison between Prolog and Erlang. That their computation models are different is well-known. The OP is already studying Prolog and therefore has some familiarity with its computation model. He/She is asking for advice on what more to study. What is being suggested is that since Erlang was inspired by Prolog (to whatever extent) it would be useful for the OP to study that too since it has more usage/acceptance in the industry given the concurrent/distributed architectures in vogue today.

> Knowing Erlang will not help you learn Prolog just because the syntax looks similar.

That is not what is being suggested. This is your fundamental misconception which permeates the rest of your post. The OP is already studying Prolog and is being advised to look into Erlang too in addition to that. It is not "instead of" but "in addition to" (due to the history of Erlang). Furthermore Erlog allows you to embed Prolog within Erlang giving the OP the best of both worlds.

> you won't understand that if you just stop at the syntax.

Again this is your misconception. That is not what is being suggested. Nobody in their right mind will say "stop at the syntax". But syntax is the gateway into the study of computer languages i.e. you first show the syntax and then explain its semantics. Humans learn new things using analogies/similes/metaphors/etc. Here similar syntax is a great help since it eases the cognitive load while learning new concepts. When the semantic model varies, the dissonance may not be too great and so we can better modify our mental models and understanding.

The website listing the syntax comparisons of various languages is actually pretty useful when looked at with the above viewpoint. Start with your known syntax in a language you know, see how it maps to the same/similar syntax in the language you want to study and then lookup/compare/contrast the semantics of it in both the languages. It is like a cheat-sheet while studying a proper programming language textbook.


>> You are on a completely different tangent here.

I don't think so. In your comment above you keep saying that it's not about syntax, then immediately after you switch to explaining how it really is. Notice this for instance. I say:

>>> Knowing Erlang will not help you learn Prolog just because the syntax looks similar.

You say:

>> That is not what is being suggested.

Then you go on to say:

>> But syntax is the gateway into the study of computer languages i.e. you first show the syntax and then explain its semantics. Humans learn new things using analogies/similes/metaphors/etc. Here similar syntax is a great help since it eases the cognitive load while learning new concepts.

And:

>> Start with your known syntax in a language you know, see how it maps to the same/similar syntax in the language you want to study and then lookup/compare/contrast the semantics of it in both the languages. It is like a cheat-sheet while studying a proper programming language textbook.

So it's hard for me to see how you're not suggesting that syntax will help you learn Prolog, when you're arguing that syntax helps you learn a new language.

And what I'm saying is that knowing something about the syntax of Erlang will not help you learn anything about Prolog. I don't think you can accept that at all, but it's true and it's my experience of many years watching people, including myself, trying to learn Prolog using their knowledge of other languages, either their syntax or semantics. My experience includes working as a teaching assistant for a Prolog course during my PhD. The truth is that Prolog is not an easy language to learn because it works very differently than any other programming language and you won't be able to use your knowledge of any other programming language to learn it. If you want to learn Prolog you have to start by finding a way to set aside most of what you know about programming. That is very hard to do, and so it's very hard to learn Prolog, except at a very superficial level.

I appreciate that's not easy to accept without long time experience.

But I don't think we're getting anywhere with this conversation so I'll respectfully bow out and thank you for the patient and civil exchange of views.


Again, the OP is moving from Prolog -> Erlang and not the other way around. You keep repeating "So it's hard for me to see how you're not suggesting that syntax will help you learn Prolog," but that is not OPs path. Erlang evolved out of Prolog and hence some of the syntax/semantics are very similar.

To settle this once and for all: in your previous post you gave the example of assignment, which actually proves my point that you can easily go from Prolog to Erlang. Why? Because the behaviour is the same in Erlang, i.e. there is no assignment, only pattern-matching. You can open an Erlang shell and type in X=a, X=b and it will barf. To show it even more convincingly, type in Y=1+2, Y=3 and Y=4 and it will barf only on the last line. Lhs=Rhs are pattern-matched and variables are single assignment. A person with prior exposure to Prolog already knows this and hence can easily map it to Erlang. Only people coming from imperative languages (eg. C) need to adjust their mental model of the "=" operator. When I started with Erlang years ago that was what I had to learn first, but thankfully all Erlang books cover it in the introductory chapter itself. Also, functions in Erlang are written as a series of clauses which are pattern-matched.

I presume you do not have experience with Erlang and so your experience with only Prolog blinds you to the similarities. I highly recommend you get a Erlang book (Joe Armstrong's book is a good one since he uses much of the terminology like "terms", "clauses" etc. from Prolog) and with the help of the cheatsheet just try out the basic syntax in the Erlang shell and see what maps to Prolog. I am quite sure that you will find it convincing.

Finally, w.r.t. Prolog's computation model being based on Predicate Calculus/First-Order Logic and the model/semantics therefore being very important: that is not being denied. I myself came to Predicate Calculus from the "Program Correctness" viewpoint of Floyd/Hoare/Dijkstra and hence am quite aware of its intricacies. But for most programmers without such a background in set theory/relations/logic, model building takes time, and so syntax is their entry point, with understanding developing over a period of time with study and trial-and-error.


Ha! That explains a lot. I've started looking into Prolog recently, and there were some... familiar echoes in there, reminiscent of Erlang.

But of course, the submarine is like a cigar, not the cigar like a submarine.


The Development of Erlang by Joe Armstrong (pdf) - https://dl.acm.org/doi/pdf/10.1145/258948.258967


Prolog is a really interesting language. It's like Lisp in that it's definitely worth learning very well, even if you don't find a use-case for it, because the things you learn help you think about programming in a whole new way.

The prolog community is pretty active. SWI has a discourse group. There's SWISH, CLP(FD/Z), abduction via CHR (a rewrite system) or libraries like ACLP. Prolog is homoiconic, and it achieves it in a unique way, via things like functor/3 and =../2 rather than a macro system. There's growing interest in ISO-standard, pure, monotonic prolog for writing large, clean prolog codebases. SWI is the most mature prolog, but Scryer and Trealla are very active and ISO conformant. Trealla is quite embeddable, particularly in javascript codebases. There's also janus for python, and the community is looking to integrate prolog with LLMs.

Prolog shines for writing bidirectional parsers, NLP, expert systems, abductive reasoning, and constraint logic programming. Pure monotonic prolog has some very useful properties in terms of debuggability, making it useful for large prolog programs. There's also some interesting work in developing pure io (library(pio)). Prolog also has a few different techniques for coroutining, including shift/reset. Markus Triska has a very nice youtube series and book on prolog that's worth watching/reading.

The main downside to prolog is really just that there's a steep learning curve to it that puts a lot of people off and prevents it from gaining more traction, similar to why langs like lisp, haskell, and idris have trouble gaining traction. SWI has a lot of features, but it's also not ISO conformant, and a lot of libraries aren't portable and/or feel very procedural/imperative, which defeats the purpose of prolog. The useful libraries can often be ported to less popular prologs that are more promising, like scryer and trealla. For example, I managed to port ACLP to trealla yesterday without much effort, which is a pretty useful abductive system for writing expert systems or any sort of abductive reasoning.
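
A quick taste of the term-inspection builtins mentioned above, in any ISO Prolog:

    % "Univ" (=..) decomposes a term into its name and arguments...
    ?- foo(a, b) =.. L.
    % L = [foo, a, b].

    % ...and builds terms from lists, so programs can construct and call code:
    ?- G =.. [atom_length, prolog, Len], call(G).
    % G = atom_length(prolog, 6), Len = 6.

    % functor/3 inspects (or creates) a term's name and arity:
    ?- functor(foo(a, b), Name, Arity).
    % Name = foo, Arity = 2.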


You can also change the search strategy used in Prolog, e.g. using library(search), supporting BFS and iterative deepening. Tabling is also supported.

Another useful tool for homoiconicity is clause/2:

    ?- assertz((foo(X) :- append(X, _, [1,2,3]))).
    true.

    ?- clause(foo(X), Body).
    Body = append(X, _, [1, 2, 3]).

If you really like Haskell and OCaml's pattern matching, you'll probably really love Prolog. Prolog's pattern matching is much more powerful.
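
A tiny illustration of the difference: unlike ML-style pattern matching, unification is bidirectional and works on partially instantiated terms:

    % Bindings flow in both directions at once:
    ?- point(X, 2) = point(1, Y).
    % X = 1, Y = 2.

    % Unbound parts simply stay unbound:
    ?- pair(A, pair(A, B)) = pair(1, C).
    % A = 1, C = pair(1, B).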


Not sure about Prolog itself but Datalog really needs to overtake SQL, it's just so much better.

Related areas like constraint programming are still very relevant.


Could you explain more or point out some interesting references? I'm currently trying to understand how Datalog compares to SQL and, potentially GraphDBs


TypeDb is a practical Datalog-based database system [1] (with a different syntax). TerminusDb is a project in a similar vein [2], but actually an RDF store at its core. If you want to experiment with the connections between Datalog, relational algebra, and SQL, check out the Datalog Educational System [3]. And if you want to jump into the theory, Foundations of Databases (the "Alice book") is very thorough but relatively readable [4]! Oh, and there's a Google project, Logica, to do Datalog over Postgres databases [5].

[1]: https://typedb.com/ [2]: https://terminusdb.com/ [3]: http://www.fdi.ucm.es/profesor/fernan/des/ [4]: http://webdam.inria.fr/Alice/ [5]: https://github.com/evgskv/logica


Mangle is a language that includes "textbook datalog" as a subset https://github.com/google/mangle ; like any real-world datalog language, it extends datalog with various facilities to make it practical.

It was discussed on HN https://news.ycombinator.com/item?id=33756800 and is implemented in Go. Meanwhile, there are the beginnings of a Rust implementation.

If you are looking for datalog in the textbooks, here are some references: https://github.com/google/mangle/blob/main/docs/bibliography...

A graph-DB-flavoured short intro to Datalog: just like the edges of a graph can be represented as a simple table (src, target), you can consider a database tuple or a Datalog or Prolog fact foo(x1, ..., xN) as a "generalized edge." The nice thing about Datalog is that one can express connections elegantly as "foo(...X...), bar(...X...)" (a conjunction, X being a "node"), whereas in the SQL world one has to deal with a clumsy JOIN statement to express the same thing.


Don't have any interesting references, sorry. My reasoning is mainly one of simplicity and power. In SQL you need to think in terms of tables, inner joins, outer joins, foreign keys, etc., whereas in Datalog you do everything with relations, as in Prolog.

Not only is it conceptually much simpler, it's also a "pit of success" situation as thinking in terms of relations instead of tables leads you towards normal forms by default.

Add the ability to automatically derive new facts based on rules and it just wins by a country mile. I recommend giving Soufflé a try.

I haven't worked with GraphDBs enough to comment on that.


Prolog and Datalog example (they are identical in this case)

    % Facts
    parent(john, mary).
    parent(mary, ann).
    parent(mary, tom).

    % Rules
    ancestor(X, Y) :- parent(X, Y).
    ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).

    % Query
    ?- ancestor(john, X).
The Prolog code looks identical to Datalog but the execution model is different. Prolog uses depth-first search and backtracking, which can lead to infinite loops if the rules are not carefully ordered.

Datalog starts by evaluating all possible combinations of facts and rules. It builds a bottom-up derivation of all possible facts:

a. First, it derives all direct parent relationships.

b. Then, it applies the ancestor rules iteratively until no new facts can be derived.

For the query ancestor(john, X):

It returns all X that satisfy the ancestor relationship with john. This includes mary, ann, and tom. The order of rules doesn't affect the result or termination. Datalog guarantees termination because it operates on a finite set of possible facts.

Prolog uses a top-down, depth-first search strategy with backtracking.

For the query ancestor(john, X):

a. It first tries to satisfy parent(john, X). This succeeds with X = mary.

b. It then backtracks and tries the second rule: It satisfies parent(john, Y) with Y = mary. Then recursively calls ancestor(mary, X).

c. This process continues, exploring the tree depth-first.

Prolog will find solutions in this order: mary, ann, tom.

The order of clauses can affect both the order of results and termination: If the recursive rule were listed first, Prolog could enter an infinite loop. Prolog doesn't guarantee termination, especially with recursive rules.

SQL is more verbose. The equivalent of the Datalog/Prolog example above is:

    -- Create and populate the table
    CREATE TABLE Parent (
        parent VARCHAR(50),
        child VARCHAR(50)
    );

    INSERT INTO Parent VALUES ('john', 'mary');
    INSERT INTO Parent VALUES ('mary', 'ann');
    INSERT INTO Parent VALUES ('mary', 'tom');

    -- Recursive query to find ancestors
    WITH RECURSIVE Ancestor AS (
        SELECT parent, child
        FROM Parent
        UNION ALL
        SELECT a.parent, p.child
        FROM Ancestor a
        JOIN Parent p ON a.child = p.parent
    )
    SELECT DISTINCT child AS descendant
    FROM Ancestor
    WHERE parent = 'john';
This is a more interesting example of how one might use Datalog on a large dataset:

    % Base relation: friend/2 facts (e.g. friend(alice, bob))
    % are loaded from the dataset.

    % Define friend-of-friend relation
    friend_of_friend(X, Z) :- friend(X, Y), friend(Y, Z), X != Z.

    % Define potential friend recommendation
    % (friend of friend who is not already a friend)
    recommend_friend(X, Z) :- friend_of_friend(X, Z), not friend(X, Z).

    % Count mutual friends for recommendations
    mutual_friend_count(X, Z, Count) :- 
        recommend_friend(X, Z),
        Count = count{Y : friend(X, Y), friend(Y, Z)}.

    % Query to get top friend recommendations for a person
    top_recommendations(Person, RecommendedFriend, MutualCount) :-
        mutual_friend_count(Person, RecommendedFriend, MutualCount),
        MutualCount >= 5,
        MutualCount = max{C : mutual_friend_count(Person, _, C)}.
The equivalent Postgres example would be:

    WITH RECURSIVE
    -- Base friend relation
    friends AS (
        SELECT DISTINCT person1, person2
        FROM friendship
        UNION
        SELECT person2, person1
        FROM friendship
    ),

    -- Friend of friend relation
    friend_of_friend AS (
        SELECT f1.person1 AS person, f2.person2 AS friend_of_friend
        FROM friends f1
        JOIN friends f2 ON f1.person2 = f2.person1
        WHERE f1.person1 <> f2.person2
    ),

    -- Potential friend recommendations
    potential_recommendations AS (
        SELECT fof.person, fof.friend_of_friend, 
            COUNT(*) AS mutual_friend_count
        FROM friend_of_friend fof
        LEFT JOIN friends f ON fof.person = f.person1 AND fof.friend_of_friend = f.person2
        WHERE f.person1 IS NULL  -- Ensure they're not already friends
        GROUP BY fof.person, fof.friend_of_friend
        HAVING COUNT(*) >= 5  -- Minimum mutual friends threshold
    ),

    -- Rank recommendations
    ranked_recommendations AS (
        SELECT person, friend_of_friend, mutual_friend_count,
            RANK() OVER (PARTITION BY person ORDER BY mutual_friend_count DESC) as rank
        FROM potential_recommendations
    )

    -- Get top recommendations
    SELECT person, friend_of_friend, mutual_friend_count
    FROM ranked_recommendations
    WHERE rank = 1;
Full example you can run yourself: https://onecompiler.com/postgresql/42khbswat


> Prolog uses depth-first search and backtracking, which can lead to infinite loops if the rules are not carefully ordered

Is this an issue in practice? Most languages can create programs with infinite loops, but it's easy to spot in code reviews. It's been over a decade since I encountered an infinite loop in production in the backend. Just wondering if the same is true for Prolog.


Here's an infinite loop in Prolog, getting the length of a list:

    length(List_of_animals, Len)
Oops, List_of_animals hasn't been bound to any value, so length/2 will backtrack forever making it a longer and longer list of empty placeholders. Nothing will warn you that the variable wasn't declared because that's also a normal thing to do. Here's another, checking if something is in a list:

    member(cat, List_of_animals)
same problem, if the list isn't grounded to a fixed length list by the time this line executes, backtracking will generate longer and longer lists with `cat` in them and lots of placeholders:

    [cat]
    [_, cat]
    [_, _, cat]
    ...
forever. It's not just that you can accidentally write an infinite for(;;) loop by typoing the exit condition, it's that a lot of things in Prolog can be used in ways which finish deterministically or in ways that act a bit like Python generators yielding endless answers. So it's about the context in which you call them, and the surrounding code. e.g. one reason you're using Prolog is that you want it to generate List_of_animals for you (making up fictional animal names, or something), so you can't look for a missing `List_of_animals = [...]` because there might not be one anywhere.


> Nothing will warn you that the variable wasn't declared because that's also a normal thing to do.

Minor nitpick regarding an otherwise good answer: Prolog systems will warn you about "singleton variables", that is, variables with exactly one occurrence. This does catch the usual cases of this kind of error.


The OP is asking whether that is an issue "in practice" and then points out the rarity of infinite loops "in production in the backend" [1].

For me, while that kind of thing gets me once in a while it never makes it to my final commits. That's because I always test every predicate I add to a program in isolation. Hey, sometimes I even write unit tests! So my experience is that the ability to write and test your program in sizeable chunks makes up for the danger of unbound variables causing infinite loops, in practice.

Also note that some Prologs have helpful error messages that direct the user to the problem. E.g. in SWI-Prolog (in "debug" mode):

  [debug]  ?- findall(cat,member(cat,List_of_animals),Cats).
  ERROR: Stack limit (1.0Gb) exceeded
  ERROR:   Stack sizes: local: 71Kb, global: 0.9Gb, trail: 3Kb
  ERROR:   Stack depth: 417, last-call: 0%, Choice points: 415
  ERROR:   Possible non-terminating recursion:
  ERROR:     [417] lists:member_(_230090210, cat, _230090214)
  ERROR:     [416] lists:member_([length:1|_230090242], cat, _230090236)
  ^  Exception: (4) setup_call_cleanup('$toplevel':notrace(call_repl_loop_hook(begin, 0)),   $toplevel':'$query_loop'(0), '$toplevel':notrace(call_repl_loop_hook(end, 0))) ? creep
Note:

  ERROR:   Possible non-terminating recursion:
  ERROR:     [417] lists:member_(_230090210, cat, _230090214)
_________________________

[1] I remember two infinite loops in production, one in the backend, one in the frontend. That was ca. 2013 so more than 10 years ago- good estimate!

The first loop was a missing terminating condition in a for-loop in C# that brought down the company's server along with every client's deployment (it was before everyone moved all their data to the cloud, you see). There was a meeting Upstairs™ and the programming team lead returned to tell us that he had explained what happened, explained that it was nobody's fault and that there's no way to prevent infinite loops like that happening again with perfect certainty, and that Upstairs had decided that, henceforth, iteration should no longer be used and when loops are required recursion should be used instead. Obviously that was completely ignored and everyone carried on as before.

The second loop was a bona-fide recursion without a terminating condition that happened in an in-house, Django-like templating language called Mango. I don't remember the details but the folks who had coded the Mango interpreter evidently did a good job because it had no problem interpreting a recursive call in a template. The programming team lead from the previous story found it in a late-afternoon session where it was just me and him in the room. I felt a little deflated but I was just a junior starting out so I guess I was excused for missing it.


Take the infinite loop as just an example of an issue with depth-first search and backtracking. To be more general, I'd say that the issue is that the overall performance of a Prolog program can be very dependent on the ordering of its rules.

As an anecdote: a long time ago, for a toy project, switching the order of two rules took the runtime for finding all solutions from ~15 minutes to around a second (long time ago, memory fuzzy...). The difference was going down a "wrong" path and wasting a lot of time evaluating failing possibilities, vs. taking the right path and getting to the solutions very quickly.

So in practice, even though Prolog is declarative, to get good results you need to understand how the search is done and organize the rules so that the search proceeds in the most efficient way. The runtime search is a leaky abstraction, in a way ;)

It's not an issue limited to Prolog; many solvers can be helped by steering the search the "right" way. A declarative language for constraint problems like MiniZinc, for example, provides ways to pass the solver hints on how best to search.

Also, most modern Prologs support tabling, which departs from strict DFS+backtracking and can help in some cases. But here too, getting the best results may require understanding how the engine will search, tabling included.


There are classes of infinite loops that are harder to spot for beginners, it takes a while to really understand the execution model.

Prolog variables can have two states at runtime: unbound or bound. A bound variable refers to some value, while an unbound variable is a "hole" to be filled in at a later time. It's common to pass an unbound variable into some call and expect the callee to bind it to a value. This can cause problems with infinite recursion where you intend to write a call that binds some variable, but the way you've structured your program, it will not actually bind it. So the callee ends up in the same state as the caller, makes a recursive call hoping its callee will bind the variable, and down the infinite recursion you go. With experience you can definitely spot this in code review. You'll also catch it in testing, if you test properly. But it's different enough from other languages that learners struggle with it at first.
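
A hypothetical sketch of that failure mode (next_id/2 is made up for illustration): the clause that would bind the variable never fires, so the recursive clause keeps passing the still-unbound variable down:

    % integer/1 fails on an unbound variable, so with a fresh variable
    % only the recursive clause ever applies:
    next_id(Id, Id) :-
        integer(Id).
    next_id(Id, Next) :-
        next_id(Id, Next0),
        Next is Next0 + 1.

    ?- next_id(X, N).   % recurses forever; X is never bound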

Another source of (seeming) nontermination is when you ask Prolog's backtracking search to find an answer to some query in a search space that is too large and may not contain an answer at all, or only an answer that is impracticably far away. This is also sort of Prolog-specific since in other languages you rarely write the same kind of optimistic recursive search. This is harder to spot in code review since it's really application-specific what the search space looks like. But again, you test. And when in doubt, you direct and limit the search appropriately.


Infinite loops in Prolog can appear with very subtle changes in the use of code.

One of the core problems is related to the reversible nature of Prolog. Not only are some programs reversible and some, practically speaking, not; there are many gradations in between.

The result is that programs that look equivalent and whose tests appear equivalent may exhibit non-termination in surprising ways. This is, in my experience, the rule rather than the exception with Prolog.


Prolog provides predicates to throw and catch exceptions and it's simple to test that a predicate is called in the right "mode" (i.e. what variables are bound on entry and on exit).
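
A minimal sketch of such a mode check, using only standard builtins (checked_length/2 is a made-up name):

    % Throw a clean instantiation error instead of looping
    % when both arguments are unbound:
    checked_length(List, Length) :-
        (   var(List), var(Length)
        ->  throw(error(instantiation_error, checked_length/2))
        ;   length(List, Length)
        ).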

That detracts from the declarative aesthetics of the program code, so it upsets purists, but it is very useful to those who want to actually use the language to actually write actual programs, and so it can avoid lots of wailing and gnashing of teeth.

In other words, Prolog is like any other programming language: if you're careful, you will not hurt yourself. Also applies to chainsaws, bathtubs, and banana peels.


Yes.

It is trivially easy to create loops of rules when describing abstract properties.

Concrete properties tend to have "levels" to them, but many human concepts are self-referential.

In this way, it's possible to spot that there may be an issue now or in the future, because the presence or absence of a loop depends on the specific choice of dependencies of a concept. However, spotting the potential for a loop doesn't do a lot to help remove its potential existence, or show whether it is there or not.


How does the Datalog approach compare with RETE?


The big deal about Datalog is that it is equivalent to SQL-with-recursion. Thus, it can compile to database queries.


Are there any production ready open source databases using it?


DataScript, Datahike, Datalevin, and XTDB 1.x are open-source. (XTDB 2.x is also open-source but has switched from Datalog to its own query language and SQL.) DataScript, Datalevin, and XTDB have been used in production; not sure about Datahike. All of these databases come from the Clojure community and target Clojure as the primary language. The XTDB team has published a comparison matrix at https://clojurelog.github.io/.

Aside: I write a lot more Python than Clojure, and I wish someone ported Datalevin/Datahike/persistent DataScript to Python. I'd try it as an alternative to SQLite. I suspect with thoughtful API design, an embedded Datalog could feel organic in Python. It might be easier to prototype with than SQLite. There are Datalog and miniKanren implementations for Python, but they are not designed as an on-disk database. PyCozo might be the closest thing that exists. (A sibling comment https://news.ycombinator.com/item?id=40995652 already mentions Cozo.)


Not sure if "production ready" but it's worth looking at Cozo:

https://github.com/cozodb/cozo

Has a dialect of Datalog + some vector support. Multiple storage engines for backend including SQLite, so if your concern is data stability that seems like a reasonable, proven option.


Compiling Datalog to SQL with Logica is possibly the easiest path if you need a production ready open source Datalog setup (i.e. choose your favourite managed Postgres provider): https://logica.dev/


Datomic uses Datalog with a weird clojure syntax instead of the usual prolog-like syntax.


Not open source though?


No. It's only free as in beer. There's some weird mention about the Apache 2 license, but it only applies to the binaries, for some odd reason.


Hmm open source I'm not sure, there are many SQLite equivalents listed on wikipedia though, if that counts.


Shameless plug: you should check out my podcast The Search Space for a view of the broader landscape of Prolog and logic programming: https://thesearch.space/

I don't publish episodes often but I have a lot of good interviewees lined up :)

In general, I would advise you to look beyond Prolog and explore Answer Set Programming, the Picat language, and the connections between logic programming and databases (SQL, RDF or otherwise). Not instead of Prolog, but in parallel. Prolog is awesome!


I love your podcast! I wish you published episodes more often!

I particularly enjoyed the first episode, the conversation with Robert Kowalski.


Good to know there is further content lined up! I’m subscribed and eagerly waiting for it!


I'll second the plug: it's an excellent podcast


thanks to the thread for allowing me to find you, and to you for making the interviews


ASP is in another uni course of mine ;). I'll check the podcast, thanks


I've played around with Prolog on and off for 7 years. Still a novice. It's one of those languages that forces your mind to grow in new directions.

It's difficult to make a case for it. The declarative paradigm is nice, but compared to other languages you're only saving a couple of for-loops. I think its benefit comes from expressiveness for problems where CLP(FD) can be applied. I once built an internal tool with Python and SWI-Prolog that combined user input with CLP(FD) to configure test accounts in a consistent and useful way. Users could provide partial constraints, and the system would fill in the rest. Again, the ease of CLP(FD) is great.
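
A hypothetical sketch of that pattern (the account fields are made up): the user pins whichever fields they care about, and labeling fills in the rest consistently:

    :- use_module(library(clpfd)).

    % Invariants every generated test account must satisfy.
    account(Balance, CreditLimit, Overdraft) :-
        Balance in 0..100000,
        CreditLimit in 0..5000,
        Overdraft in 0..1000,
        Overdraft #=< CreditLimit.

    % The user fixes only Balance; the system picks the rest.
    ?- account(2500, CL, OD), label([CL, OD]).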

I've had some fun generating Prolog facts/databases with LLMs and it's something I want to explore more.

Note: I was just messing around with Prolog this week: https://hpincket.com/adding-an-easter-egg-to-our-numeronym-p...


I have been interested in Prolog since my time at the University, and I loved the idea of logic programming.

For "proper" Prolog, in 2024 it is a niche language alive in specific constraint solving applications, but not really used outside of that. I haven't seen anyone attempting at using prolog as a general purpose language since the 90'.

Datalog and logic-inspired languages tend to pop up here and there as domain-specific languages.

Rego is a recent incarnation which has had good adoption for k8s and other "modern" systems. However, when trying to get people in my org to adopt it in practice, I saw engineers struggle with the paradigm once complexity grows beyond toy problems.


The most recent Prolog news I've come across in recent years is some updates to SWI-Prolog (can't find a good link) and some talk of Scryer Prolog[0], which is a newer implementation of Prolog in Rust.

One interesting development recently is a load of research into, reverse engineering of, and emulation of the 1986 Sega AI Computer[1], which used Prolog under the hood for mostly educational software. Unfortunately, it does not seem there is a way to actually write some Prolog for the thing today :(

[0] https://github.com/mthom/scryer-prolog

[1] https://www.smspower.org/SegaAI/Index


With compliments to your prof ;), interest in Prolog just now is recovering from a years-long focus on W3C's RDF/SPARQL. TBL surely had an itch to scratch with regards to logical knowledge representation dating back even longer than the web [1]. But Prolog has broader applicability, not only in logical/knowledge graph querying, but also in solving all kinds of discrete combinatorial optimization problems. Or, as the Quantum Prolog site [2] puts it, "planning, optimization, diagnostics, and complex configuration." The site demos logistics optimization (in-browser demo) and reports initial optimization (parallelization) of Inductive Logic Programming and other ML tasks for partially auto-generating Prolog code from existing solutions.

Edit: ... and on performance vs SWI Prolog, too

[1]: https://en.wikipedia.org/wiki/ENQUIRE

[2]: https://quantumprolog.sgml.io


The problem w/ OWL is that everybody wants to work with first-order logic + math, but Gödel proved it isn't decidable.

For instance if I wanted to express financial regulations or business rules inside a bank or other business I'd need to use math: for instance to express the conditions for reserve requirements or approving a loan.

OWL is best thought of as a set of templates for generating first-order logic rules that are decidable and also (in theory) quick to evaluate with the Tableau algorithm.

In certain domains you might tolerate tools that are imperfect, like it isn't fair to expect a SMT solver to figure out this one

   x^N + y^N = z^N
where x,y,z and N are all positive integers with N>2. For that one it would try to find solutions and probably time out. For some similar problems (a different polynomial) it might give you an answer.

OWL doesn't want to go there which is a big reason people say "Nein Danke!"


> Gödel proved it isn't decidable.

He did no such thing. He proved undecidable problems exist in any system powerful enough to be useful. That doesn’t make those systems useless, though.


The trouble is the creators of OWL wanted to have performance and reliability bounds. That is, they wanted to make systems that act more like a conventional database server than an SMT solver.

I think they could have made a more expressive standard and something like that might have had more appeal to people but been less consistent in terms of performance.


My honest opinion is to avoid Prolog for most enterprise needs in favor of a regular general purpose programming language that calls out to a mathematical or constraint solver via API when the need arises. This way you get a language that is easier to learn with a strong ecosystem of libraries along with a solver that is built for your particular problem.

Prolog may excel in some niche cases that are documented out there which is fine. For the majority of cases I can think of...it is too esoteric.

Prolog is SUPER cool though, as is its history. You should definitely play with it a bit.


Prolog itself is still developed and used in various settings (mostly swi-prolog?), but other languages and logic engines solve domain specific but similar problems better (rule engines, formal proof verifiers, etc). For exploratory work it can be useful.

I have tried to use it in combination with LLMs, unsuccessfully, partly because the domain was not specific enough. Otherwise you need a lot of real-world knowledge and a large fact database.

Logic engines for first order logic in RDF/OWL also have interesting logical inference abilities, like graphdbs.

Any programming language can do "logic" and the work at MIT/CSAIL in probabilistic programming may turn out to be a better way to combine fuzzy logic and formal proofs.

Not sure this answers your question, but maybe this points towards some interesting directions.


Any answer here is a good one since the question is soooo unspecific :D. My professor is a staunch advocate for RDF/OWL, inference engines and stuff like that (hence why i also mentioned ontologies :D).

The thing is that i think that the language itself has so much untapped potential and the world that i dived into with my studies is so vast, so full of stuff that it left me kind of dazed to be fair!

I got some papers regarding knowledge representation (that to be fair i still have to read... exams and work got in the way of it all :/) but still it seems so... odd: when we were studying OOP in my bachelor we went over the usual examples that made you understand "this is not an imperative paradigm but there are object abstractions" while, in my studies, prolog and logic programming in general were seen as a tool of sorts for reaching an objective, like "hey we have a MAS system, let's sprinkle some prolog in it for fun :D" (maybe i am exaggerating but it feels like this lol). I feel it can do much much more


You are definitely on to something here. OOP has some common roots with formal ontologies and knowledge representation (not so much the programming languages, but object oriented modeling). OO fails at this for various reasons, whereas logic is tailored for this specific purpose. Check out ErgoAI (formerly Flora-2), it's the most advanced Prolog flavor for representing and reasoning over knowledge. https://github.com/ErgoAI


you guys are giving me so much to read thanks <3 i'll give this a check when i have some time out of exams/work. I will surely check ErgoAI


If you want to see something truly fascinating, take apart https://logtalk.org/ - it implements an OO system for prolog which gives you all sorts of advantages (the least of which being a not-terrible way of getting namespaces).
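For a taste, here is a minimal sketch of a Logtalk object (the object and predicate names are invented for illustration):

  :- object(ancestry).

      :- public(ancestor/2).

      % ancestor/2 is exported; parent/2 stays local to the object,
      % which is the namespacing win mentioned above.
      ancestor(A, D) :- parent(A, D).
      ancestor(A, D) :- parent(A, X), ancestor(X, D).

      parent(tom, bob).
      parent(bob, ann).

  :- end_object.

  % Messages are sent with ::
  % ?- ancestry::ancestor(tom, Who).
  % Who = bob ; Who = ann.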

Reading "The Art of Prolog" and "The Craft of Prolog" was fun for me, as was learning how the Warren Abstract Machine works.

(I am not at all a prolog expert, merely a programmer who happens to be fascinated by it, so this is all dabbling on my end but hopefully provides some stuff that's fun to learn for you as well)


Hi! Are you Jim Hendler (or related to him), my Reagan-era AI professor from UMD?

https://en.wikipedia.org/wiki/James_Hendler

My Prolog programming assignment #4, a Prolog "nehcihsahA" detector (maternal uncle: a mother's brother, or any equivalent relative) seemed designed to make me hate Prolog with a passion, involving bending over backwards by defining ridiculous predicates like siblish, sibloid, relatoid, sistoid, brothoid, mothoid, and fathoid.

https://www.donhopkins.com/home/code/nehcihsaha.prolog.txt

I much more enjoyed the OPS-5 programming assignment #6, for which I made a worm simulation that hacked into Ollie North's Intimus-007s ("the ace of security paper shredders") in the White House basement, via Professor Hendler's Sun workstation dormouse, rms's account with password rms on prep, and Casper Weinberger's account on UMD's Vax 11/780 mimsy and NSA's PDP-11/70 tycho, connected via the NSA's MILNET IMP 57 at Fort Meade, then posted Ollie North's secret diary and notes it found in the paper shredder to talk.rumors via the UCB-Vax usenet gateway.

https://www.donhopkins.com/home/code/crack-ollie.ops5.txt

https://news.ycombinator.com/item?id=18376750

>At the University of Maryland, our network access was through the NSA's "secret" MILNET IMP 57 at Fort Mead. It was pretty obvious that UMD got their network access via NSA, because mimsy.umd.edu had a similar "*.57" IP address as dockmaster, tycho and coins. [...]


> My Prolog programming assignment #4, a Prolog "nehcihsahA" detector (maternal uncle: a mother's brother, or any equivalent relative) seemed designed to make me hate Prolog with a passion, involving bending over backwards by defining ridiculous predicates

Your attempt at a solution definitely defines ridiculous predicates, but you should not blame that on your teacher or the language. For example, there is no way that defining "a mother's brother" would need to refer to a "same sex" predicate in any way. You took a wrong turn somewhere with your approach, but again it's neither the language nor your teacher that forced you down that path.


No. Good catch though! I am not Jim, though I have talked with him!


If you are struggling to get Prolog.

Think about it this way. In a regular programming language you write code and then write unit test cases to validate it.

In Prolog, all you do is write the test cases and then it's up to the compiler to write and run the code for you. In other words, you define a set of cases for which a logic is supposed to hold true. The compiler then decides what the code must look like if that is the case.

This might look easy for simple True/False kinds of cases. But when you have to write test cases for functions that return deeply nested data structures and all their variations, it becomes easier said than done. The other part that makes Prolog hard to get is that you are only allowed recursion to iterate or define things. All of this makes it a little hard to think, write and troubleshoot Prolog.
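To make that concrete, here is a tiny sketch: each clause reads like a test case ("the max of X and Y is X when X >= Y"), and the engine supplies the control:

  % Two "cases the logic must satisfy"; Prolog works out the rest.
  max_of(X, Y, X) :- X >= Y.
  max_of(X, Y, Y) :- Y > X.

  % ?- max_of(3, 7, M).
  % M = 7.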

If you are reading Prolog code then try to think of it like you have access to a code repo's unit test cases, but the actual code doesn't exist. The test cases are considered sufficient enough to define the code in a concrete way.

As you might have started to notice by now, this is actually harder than writing the code itself. In a regular programming language, you get sufficient space to write a function that might not do 100% of what was intended (bugs). In Prolog such an adventure will produce something else altogether.


I've brushed up against it in the form of Datalog as the query language for databases like Datomic and XTDB, so its soul is alive and well!

I'm also considering a prolog like domain specific language to make a state syncing engine with pure declarations of how the state in system A is reflected in System B, etc.

Prolog itself may not be mainstream, but it is an answer to the universal problem space of constraint solving, so comp sci will always be in its long shadow.


I've actually been thinking about this quite a bit. I remember a foray into Prolog when I was a younger pup in 2004-6. With the advent of LLMs, I think that perhaps we could use LLMs to extract triples from large corpora of text and then use that to build our Prolog stores or ontologies and work on them. I haven't really experimented much with it, but you saying this has reminded me that I should dig that back up again.


No idea, but it might be worth looking into Mercury and {mini,micro}Kanren/core.logic as more practical iterations on it (either by adding things to Prolog or extracting the interesting stuff to use in more general purpose languages).


At the end of the day "practical" means library support and community knowledge, by which measure Prolog and more specifically SWI and Sicstus are far more practical than any of the other logic languages or implementation options


Well if your problem does not require a solution that’s 100% written in prolog, then any relational/CLP system that can be hosted or work as a library is going to win in terms of library support and community knowledge, at solution level.

So e.g. a core.logic solution can make extensive use of the jvm ecosystem.


You may find this paper interesting: https://www.cambridge.org/core/journals/theory-and-practice-...

title: Fifty Years of Prolog and Beyond (2022)


Here's what I'm doing with Prolog:

https://github.com/stassa/louise

Louise is a Meta-Interpretive Learning (MIL) system. MIL is like a second-order Prolog where first-order programs are learned from higher-order programs by Resolution. There's a long thread of literature on MIL going back to 2014 but it now seems we're starting to move towards applications, e.g. I'm doing a post-doc where I use MIL to learn autonomous behaviours for an agent that must guide a mobile robot in survey missions. Other colleagues are working on applications in biology. We're going slowly because there's very few of us but MIL is a powerful technique that extends the soundness and completeness of SLD-Resolution to induction, so I'm hopeful that good things will happen with a bit of elbow grease and a bit of patience.
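For a rough feel of what that means, here is a hedged illustration (not Louise's actual syntax): MIL learns first-order clauses by instantiating second-order metarules, such as the "chain" metarule:

  % The "chain" metarule, schematically: P(X,Y) <- Q(X,Z), R(Z,Y),
  % where P, Q, R range over predicate symbols (second order).
  % Instantiating P = grandparent and Q = R = parent yields the
  % first-order clause:
  grandparent(X, Y) :- parent(X, Z), parent(Z, Y).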


There are a few magical algorithms/systems which give you superpowers if you can find the right application for them. At least in the pre-LLM era, they were some of the magical tools we had, for just solving declaratively specified difficult problems without us explicitly writing code, while (unlike certain AI techniques which shall remain nameless) providing correctness guarantees and often being deterministic and stable.

Prolog and logic programming is one, together with its relative, constraint logic programming, and its relative mixed integer programming, which in turn is part of the broader linear and convex programming family.

What else should we put in that category?


The problem with Prolog is that it's based on unification, and small unification engines can be expressed in a few lines in any functional programming language.

That narrows down the already small niche where one would choose Prolog by probably a few orders of magnitude.


Is this in the same sense that "one could write lisp in 99 lines of c"?

In my opinion, this does not imply that proper lisp (and correspondingly prolog) implementations are useless, just because a simple implementation can be written in a different, "more expressive" language.


There is a very practical embeddable logic-programming engine called miniKanren for many programming languages that can be used to add the logic-programming techniques of Prolog to other languages.

https://en.wikipedia.org/wiki/MiniKanren

There's a great book in the same series as the "Little Lisper"/"Little Schemer" books called "The Reasoned Schemer" that uses MiniKanren with Scheme.


No, not really. A lisp in 99 lines of C would barely be useful. In contrast, Prolog mostly shines where you need reasoning/unification over a database of facts (which happens pretty often), but that's just too easily expressed in any proper functional language. And with a bit more pain in an imperative/OO language.


What makes Prolog, Prolog is not unification on its own but SLD-Resolution with unification, where "SLD" stands for [L]inear Resolution with a [S]election rule restricted to [D]efinite clauses. If you know your Resolution typology, that means soundness and refutation-completeness (or completeness with subsumption) [1].

I don't think I've ever seen a discussion of Resolution in functional programming textbook implementations of "Prolog". They typically just bodge some depth-first search with backtracking and unification in polish notation and call it "Prolog", or "logic programming". A bit like if I wrote a "lisp" with eval(X):- call(X), then completely ignored all that jazz about lambda calculus and concentrated on garbage collection and linked lists.
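For contrast, here is the classic "vanilla" Prolog meta-interpreter, as a sketch: it stays this small precisely because unification, clause selection and backtracking (i.e. SLD resolution) are inherited from the host engine rather than bodged by hand. (The edge/2 facts are just an example database; the guards keep it off built-ins.)

  :- dynamic edge/2.
  edge(a, b).
  edge(b, c).

  % Vanilla meta-interpreter over dynamic predicates.
  solve(true).
  solve((A, B)) :- solve(A), solve(B).
  solve(Goal) :-
      Goal \= true, Goal \= (_, _),
      clause(Goal, Body),
      solve(Body).

  % ?- solve((edge(a, X), edge(X, Y))).
  % X = b, Y = c.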

A good starting point instead, if one wishes to understand Prolog and not just dismiss it out of hand, is to try and understand Resolution, and what unification does for Resolution, and why it ended up in Prolog (and how) in the first place. I'd start, well, at the beginning:

A Machine-Oriented Logic Based on the Resolution Principle

https://www.semanticscholar.org/paper/A-Machine-Oriented-Log...

Where you can follow the progress from ground Resolution, to unification, the Resolution theorem, and all the way to the (pre-modern) Subsumption Theorem. It's a long way to Prolog from there, but that's where it all begins.

_______________

[1] Although Prolog implemented by DFS is not complete if Prolog implemented by DFS is not complete (etc).


For some, how a language is implemented seems to be the paramount thing.

For many, how the language faces the user, how its paradigms fit the problems at hand and the user's mode of seeing the world, that is more important.

These days, with the terabytes, the petaflops and the megajoules, it might be even less relevant how the gears are turning inside the black box.


> expressed in a few lines in any functional programming language

I don't think that performs like a proper Prolog engine on larger problems.

Real Prologs work by compiling to something called the WAM (Warren Abstract Machine).


I've been using Prolog daily for the past 1.5 years. I've also implemented and used a Kanren in Elm, and there is simply a world of practical difference.


There is still academic work on Prolog, and more broadly deductive/logic programming. If you are looking at things with a more industrial bent, I would look to Datalog, which trades Prolog's generality for performance and predictability. Alternatively, you can go the other way and look at lambdaProlog, which adds real abstractions/HOFs to Prolog.

What I've seen in practice is that while Prolog may be good at describing a solution, its performance is often too lackluster and brittle for actual deployment: it probably fits more as a prototyping language before you do a classic implementation of the solution in a more traditional language.


There are certain (academic) problems for which Prolog is simply the best tool for the job, see e.g., https://github.com/hbrouwer/dfs-tools


> (academic)

Ah, for a second I thought someone just found a way to make Prolog useful for something. What a terrifying thought indeed, luckily the crisis has been averted, the natural order is restored and all is well.


Tangential to Prolog, perhaps check out Flix, which includes logic programming features [1] and is discussed here from time to time [2].

--

1: https://doc.flix.dev/fixpoints.html

2: https://news.ycombinator.com/item?id=25513397

2: https://news.ycombinator.com/item?id=31448889

2: https://news.ycombinator.com/item?id=38419263


I like Peter Norvig's book "Paradigms of AI Programming," where you learn old fashioned symbolic AI with LISP and Prolog. Is it outdated? Absolutely, but it is a classic read.

Maybe a use case for new AI models could be creating more old fashioned expert systems written in LISP or Prolog that are easier for humans to audit. Everything tends to come back full circle.

https://www.amazon.com/Paradigms-Artificial-Intelligence-Pro...


It's a powerhouse, an even bigger secret than Lisp at beating the average.


I chose to use Prolog to essentially build an expert system across a heterogeneous data ecosystem.

Prolog could certainly use some serious improvements to its tooling. But the language is simple enough that it doesn't prove too much of an issue. You can get so much out of the language; it can be very powerful. In the system we've built it makes up a purely logical core that is completely referentially transparent; we leave all the icky side effecting to a host program.


I'm fairly familiar with Prolog, from its operation to its implementation, having also taught it to students.

Something I never really grokked however is: when to reach for it?

It seems like a powerful tool for a certain class of problems, but somehow I never seem to stumble upon them.

An obvious candidate is problems that map clearly to solvers that can limp along with sub-state-of-the-art performance. Solving sudokus is the classical classroom example. But somehow again, never really ran into something that maps to that (I have worked mostly in compilers and distributed system, with a smattering of application programming (frontend/backend)).

Any ideas or anecdotes?


Wish there was more discussion of Datalog here. That’s come up in a few interesting places and I’d love to hear about folk’s experience with it.

https://en.wikipedia.org/wiki/Datalog


Not quite what you asked for, but as someone using it quite a lot back in the late '80s during a CS&AI degree, Prolog has its interesting features and I'm glad I used it, but I haven't missed it since. I do like declarative stuff, eg CSS!, and that remains a good memory.


Speaking of CSS, :has() [https://developer.mozilla.org/en-US/docs/Web/CSS/:has] brings it closer to the glory of Prolog. I cannot wait to abuse it in avant-garde Logic-Driven-Development.


could you please expand on it? i would love to read more


Many many languages that you will encounter and use in live projects are primarily imperative, eg: C, C++, JavaScript. They describe the "how" and "in what order".

While I was an undergrad I was exposed to Standard ML and Prolog, both of which were/are much more about declarative "what", though they could only practically interact with the actual world by side-effects and some imposed ordering (SML's 'ref', Prolog's cut).

I am still waiting for some of the amazing stuff that was in SML to materialise in C++ and Java for example, less so anything from Prolog. For example, to search a state space I might use an off-the-shelf solver with good heuristics and an objective function written in something imperative rather than use Prolog.

But it really is over 30 years since I touched Prolog, so life in it may be very different now.


Pretty interesting stuff, thanks for sharing :D


I'm not the commenter you're replying to, but in my case, I really enjoy the promise of "you write down the problem, not the solution".

In an introductory prolog course you will soon find that when a prolog program is written to solve some problem like 'whats-the-next-chess-move', it's actually doing a depth-first search (and if you use the ! cut-operator, it will stop looking for any more solutions).

But in principle, it's up to the interpreter/compiler to decide how to find solutions. In the same way that a C compiler might say "ah, you're doing tail-recursion, let me make a loop out of that", a prolog compiler might say "gee, this problem looks like it would be much more efficient to use simulated annealing to find some answers in a shorter time". That's perhaps a bit far-fetched, but a great example is Datalog which has solvers that parallelize the search. You don't write a parallel algorithm, it's just that a parallel algorithm is used to solve your problem.

A specific feature I miss in other programming approaches is that if you can find the answer to the question "is A a child of B?", the very same code is also the function to find out all of A's children, or all of B's parents. No need to explicitly code a loop, or to create the inverse function.
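A minimal illustration of that multi-directionality in plain Prolog:

  parent(tom, bob).
  parent(tom, liz).
  parent(bob, ann).

  child(C, P) :- parent(P, C).

  % One definition, three different questions:
  % ?- child(ann, bob).  % is ann a child of bob?  true
  % ?- child(C, tom).    % all of tom's children:  C = bob ; C = liz
  % ?- child(ann, P).    % all of ann's parents:   P = bob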


Eclipse CLP still seems slightly active: https://eclipseclp.org/. I used it for some process scheduling research in the early 2000s but I've never had the chance to apply it in the non-academic world


I know almost nothing about prolog, but I enjoyed this tutorial using Datalog, a subset of prolog as an alternative data query language to SQL: https://www.learndatalogtoday.org/


The CLP (constraint logic programming) systems available in some Prologs take it to the next level: https://us.swi-prolog.org/pldoc/man?section=clp
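As a taste of the style, here is the classic SEND + MORE = MONEY cryptarithm, close to how it appears in the SWI-Prolog CLP(FD) documentation:

  :- use_module(library(clpfd)).

  % Each letter stands for a distinct digit; leading digits nonzero.
  puzzle([S,E,N,D] + [M,O,R,E] = [M,O,N,E,Y]) :-
      Vars = [S,E,N,D,M,O,R,Y],
      Vars ins 0..9,
      all_different(Vars),
      S*1000 + E*100 + N*10 + D + M*1000 + O*100 + R*10 + E
          #= M*10000 + O*1000 + N*100 + E*10 + Y,
      M #\= 0, S #\= 0,
      label(Vars).

  % ?- puzzle(As + Bs = Cs).
  % As = [9, 5, 6, 7], Bs = [1, 0, 8, 5], Cs = [1, 0, 6, 5, 2].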


I would say in the open source world, SWI Prolog is still the king implementation, in regards to tooling, language features beyond ISO Prolog, and toolchains.

https://www.swi-prolog.org/


i've been aware of that for a while, it seems to be the state of the art at least in my university (to the point that to this day the researchers are trying to convert old prolog projects to this implementation)


Gerrit is one of the few major OSS projects I'm aware of that use it: https://gerrit-review.googlesource.com/Documentation/prolog-...


If you are interested in small fun stuff, SWI-Prolog has network libraries. Just recently, I implemented a network gomoku (5-in-a-row) game in it for my school project: https://gitlab.mff.cuni.cz/volfmat1/prolog-network-gomoku. Turns out you can also write quite imperative-style code with it :D
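For anyone curious what the networking side looks like, here is a hedged sketch (the host, port and move term are invented) using SWI-Prolog's library(socket):

  :- use_module(library(socket)).

  % Connect to a peer and send a single Prolog term, the way a
  % network game might transmit each move.
  send_move(Host, Port, Move) :-
      tcp_connect(Host:Port, Stream, []),
      format(Stream, "~q.~n", [Move]),
      flush_output(Stream),
      close(Stream).

  % ?- send_move(localhost, 5555, move(7, 7)).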


Formal verification uses Prolog a lot. SystemOnTPTP at U of Miami utilizes this for many of the formally verified tests on there. It is just a more intense discipline than general programming, which is why I'm perpetually drawn to it, trying to find more real-world applications. It is not exactly Prolog but close enough to mention the similarities.


Don't forget the Datalog subset!

In the 2000s I was interested in inference over RDF and wanted something a bit more than RDFS and OWL and found out about Datalog:

https://en.wikipedia.org/wiki/Datalog

There wasn't a lot of literature on it or implementations then, but a few years later people realized it's a great query language for complex queries: it does a great job on transitive closures and can do math (unlike OWL, which won't do it because Gödel proved first-order logic + math is a hot mess).

I took a comparative programming languages course circa 1993; the instructor thought that Prolog was a taste of the future of programming. At first I thought the way you can implement ordinary procedural code in Prolog was really clever, but if you write very much of it I think it is awkward; for instance, it is common to treat procedural success as a logical failure because that gets the behavior you want.

It's counterintuitive that you could write a reasonably fast interpreter for Prolog but Warren figured out how to do it and it really is a neat trick. In the 1980s the Japanese Fifth Generation project dreamed about parallel Prolog on a machine with 100s of CPUs but it was discovered pretty quickly that you couldn't really parallelize Prolog execution so they came up with the less expressive language

https://en.wikipedia.org/wiki/KL1

I am amused to see papers today where people are working on tasks similar to what they worked on in that project, parallelizing them with commodity hardware, and get scaling curves that look very similar to what was done with KL1. (In the end the 5GP settled on the same message-passing architecture that everybody else did until the GPU revolution came)

One of the nicest examples in Prolog is writing a parser by just writing the productions which works because Prolog's resolver is quite similar to a common parsing algorithm. In the large however, you can add a library to a normal programming language like Python or Java where you write the same grammar in a DSL and it is handled by the library.
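A minimal DCG sketch of that "parser from productions" idea; the grammar below is both a recognizer and, run backwards, a generator:

  % Grammar rules in DCG notation; phrase/2 drives the parse.
  sentence    --> noun_phrase, verb_phrase.
  noun_phrase --> [the], noun.
  verb_phrase --> [halts].
  noun        --> [program].
  noun        --> [prover].

  % ?- phrase(sentence, [the, program, halts]).
  % true.
  % ?- phrase(sentence, S).
  % S = [the, program, halts] ;
  % S = [the, prover, halts].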

See also production rules systems which use "forward chaining" with the RETE algorithm and variants for an approach which looks like Prolog in some ways but works in the reverse direction. My favorite example of this now is

http://www.clara-rules.org/

I built a prototype of a stream processing engine where the control plane was implemented as a set of production rules that would build a processing pipeline of reactive operators, key-value and triple stores and then tear it down. Unlike another stream processing engine I worked on, mine always got the right answers. I think a production rule system could be the target of a "low code" system. I'm a little disappointed that I've never seen a Javascript framework that uses production rules because they are a great answer to asynchronous communication choreography. (See complex event processing)


There was a massively parallel implementation of Prolog running on literally 256 processors: BA-Prolog (http://fraber.de/bap/). Unfortunately the hardware platform was abandoned some years later by Inmos.


My professor swapped Prolog out for Rust at the last minute. I don't know whether he did us a disservice or a favor.


That is my point, i think that prolog isn't just a simple tool to solve stuff, i think that its potential can still be explored (even if, to be fair, Rust can run on a functional paradigm setting)


What a curious swap. May I ask which course he taught?


It was a whirlwind "survey of languages" course. After blowing our minds with functional programming via OCaml, the last segment was traditionally logical programming via Prolog. But he decided to spare us, I guess, and made me fall in love with Rust for a few years. :p (Or he sadistically meant to inflict the trauma of knowing how much better C and C++ could be but never will be, which stays with you even after you stop using Rust and return to those.)


I’d love to hear more about your trauma :-). What are the main ways C and C++ could be better?


The easiest answer: they should have a build and test and packaging system like Cargo built-in. The best you have is a tedious anarchy of third-party systems, of which CMake seems to have come out the winner. I use CMake, but it's no Cargo (as an understatement).

Borrowing and ownership is great for building safe code, whether it's single- or multi-threaded. In C, you have a nest of pointers, and a pointer communicates nothing about the ownership of its memory. Stuff can go, and leak, and dangle, everywhere. C++ mitigated this a little with references. Then it bolted on smart pointers, mitigating it further. Without learning Rust, though, I don't think I'd be as good of a C/C++ programmer because, unlike Rust, those languages don't force you to think about where memory lives and where it's going.

The module system is also way better.

Then I could take cheap shots: `u8` is way better than `uint8_t`. Rust had slices from the beginning, to be able to refer to parts of strings in a cheap way with easy syntax, whereas only C++17 introduced the comparatively awkward `std::string_view`.

There are ways I think Rust is worse though! The easiest is that it still doesn't have a standard and will probably not see the uniquely deep and broad support of C/C++ in my lifetime. :p


There are more criticisms of rust and c++ but sure.


To piggyback onto OP to ask about something very loosely related: what about miniKanren? Are there any active projects or work being done here, either in academia or industry? Most of the ones listed on minikanren.org appear to be dead -- although I haven't gone through them all since last year


Prolog is very, very dead. I love Prolog with all my heart, but it excels at problems that are solved today much more efficiently using neural networks. So it's utterly obsolete.

The issue with Prolog is that you need to code your rules manually. Doing ML with Prolog is possible, but very clumsy. Better stick to Python.

Speed is irrelevant, because most problems suitable for Prolog are exponential. Implementation is irrelevant, because SWI-Prolog does all you need with good integrations, except that it's a bit slower. But that's irrelevant, see above.

Learning Prolog is a great experience for any advanced computer science student. It amazes, doesn't it?


Prolog was never good at the things they thought it would be good at, like AI, which is better done by ML today, specifically often, like you said, with NNs. But it turned out to be good for other things, and those use cases are still alive today, even though there are many competitors. Look at the TIOBE index: Prolog's usage is constant at just under 1 percent, and has been for decades. So it's good for something.


Prolog was never designed for function approximation, like Neural Nets, so there is no comparison. Machine Learning with Prolog is perfectly possible and not at all clumsy. In fact these days we can even say it is done elegantly, by raising everything to the second order of logic where deduction and induction become one and the same.

Let me know if you need links and refs, but please try to keep your knowledge up-to-date before making big, splashy statements like "Prolog is very, very dead".


Logic programming is overrated, at least for logic puzzles (2013)

https://news.ycombinator.com/item?id=36154011


And the response for the curious: Logic programming is underrated (also 2013): https://news.ycombinator.com/item?id=5846185


With ChatGPT it is a great time to learn new programming languages.

Questions such as "give me table with a glossary of basic Prolog terminology with examples" as well as others can be helpful.


I think of Prolog as a general purpose logic programming language, and of Datalog as logic programming more focused on data analysis. Data analysis is a very large area, so the boundary might get blurry at times.

If your data is in a relational database consider Logica - a Datalog family language that compiles to SQL and runs naturally on SQLite, Postgres, DuckDB and Google BigQuery.

Easy to install, easy to play with in CoLab or any other Jupyter notebook.

Works for data analysis (aggregation, filtering etc.) that is commonly associated with SQL, as well as recursive logical queries commonly associated with logic programming per se.

Here is what it looks like for a data-analysis-ish query of finding popular baby names over time:

  # Count babies per year.
  NameCountByYear(name:, year:) += number :- BabyNames(name:, year:, number:);

  # For each year pick the most popular.
  TopNameByYear(year) ArgMax= name -> NameCountByYear(name:, year:);

  # Accumulate most popular name into a table, dropping the year.
  PopularName(name: TopNameByYear());

The classic grandparent rule looks as usual:

  Grandparent(a, c) :- Parent(a, b), Parent(b, c);

Here is a recursive program for finding distances in a directed graph:

  D(a, b) Min= 1 :- Edge(a, b);
  D(a, b) Min= D(a, x) + D(x, b);

Links to CoLabs:

Grandparent, ancestor: https://colab.research.google.com/drive/1lujnnUOXsF6VrC9__jV...

Distance in graph: https://colab.research.google.com/drive/1sOCODHqN0ruxZSx_L-V...

Github repo: https://github.com/EvgSkv/logica


I recommend that you check out the Souffle programming language, here: https://souffle-lang.github.io/index.html

* It is a dialect of Datalog (the database-oriented subset of Prolog)

* It reads from and writes to SQLite database format as well as CSV. This allows you to preprocess the data Souffle reasons over, or postprocess the data it produces. E.g. you can generate a bunch of data in Python, output SQLite, reason in Souffle, and load the reasoning output back into Python via SQLite.

* It is pretty feature-complete when it comes to logical reasoning and transactional database management. You get the best of both the Prolog and SQLite worlds.


Not exactly a Prolog, but there's Verse, a logic (and functional, or functional logic) programming language developed at Epic Games by Simon Peyton Jones of Haskell fame and Tim Sweeney. You can already use it to build mods for Fortnite or something like that, not really sure. But there's no open source compiler available yet.


You might be interested in reading about the Japanese "Fifth Generation Computer Systems" project from 1982, which revolved around PROLOG.

https://en.wikipedia.org/wiki/Fifth_Generation_Computer_Syst...

>The Fifth Generation Computer Systems (FGCS; Japanese: 第五世代コンピュータ, romanized: daigosedai konpyūta) was a 10-year initiative begun in 1982 by Japan's Ministry of International Trade and Industry (MITI) to create computers using massively parallel computing and logic programming. It aimed to create an "epoch-making computer" with supercomputer-like performance and to provide a platform for future developments in artificial intelligence. FGCS was ahead of its time, and its excessive ambitions led to commercial failure. However, on a theoretical level, the project spurred the development of concurrent logic programming.

>The term "fifth generation" was intended to convey the system as being advanced. In the history of computing hardware, there were four "generations" of computers. Computers using vacuum tubes were called the first generation; transistors and diodes, the second; integrated circuits, the third; and those using microprocessors, the fourth. Whereas previous computer generations had focused on increasing the number of logic elements in a single CPU, the fifth generation, it was widely believed at the time, would instead turn to massive numbers of CPUs to gain performance.

[...]

>Concurrent logic programming

>In 1982, during a visit to the ICOT, Ehud Shapiro invented Concurrent Prolog, a novel programming language that integrated logic programming and concurrent programming. Concurrent Prolog is a process oriented language, which embodies dataflow synchronization and guarded-command indeterminacy as its basic control mechanisms. Shapiro described the language in a Report marked as ICOT Technical Report 003,[7] which presented a Concurrent Prolog interpreter written in Prolog. Shapiro's work on Concurrent Prolog inspired a change in the direction of the FGCS from focusing on parallel implementation of Prolog to the focus on concurrent logic programming as the software foundation for the project.[3] It also inspired the concurrent logic programming language Guarded Horn Clauses (GHC) by Ueda, which was the basis of KL1, the programming language that was finally designed and implemented by the FGCS project as its core programming language.

>The FGCS project and its findings contributed greatly to the development of the concurrent logic programming field. The project produced a new generation of promising Japanese researchers.

https://www.sjsu.edu/faculty/watkins/5thgen.htm

>The Japanese Fifth Generation project was a collaborative effort of the Japanese computer industry coordinated by the Japanese Government that intended not only to update the hardware technology of computers but alleviate the problems of programming by creating AI operating systems that would ferret out what the user wanted and then do it. The Project chose to use PROLOG as the computer language for the AI programming instead of the LISP-based programming of the American AI researchers.

The Japanese National Fifth Generation Project: Introduction, survey, and evaluation:

https://stacks.stanford.edu/file/druid:kv359wz9060/kv359wz90...

>Abstract:

Projecting a great vision of intelligent systems in the service of the economy and society, the Japanese government in 1982 launched the national Fifth Generation Computer Systems (FGCS) project. The project was carried out by a central research institute, ICOT, with personnel from its member-owners, the Japanese computer manufacturers (JCMs) and other electronics industry firms. The project was planned for ten years, but continues through year eleven and beyond. ICOT chose to focus its efforts on language issues and programming methods for logic programming, supported by special hardware. Sequential 'inference machines' (PSI) and parallel 'inference machines' (PIM) were built. Performances of the hardware-software hybrid was measured in the range planned (150 million logical inferences per second). An excellent system for logic programming on parallel machines was constructed (XLI). However, applications were done in demonstration form only (not deployed). The lack of a stream of applications that computer customers found effective and the sole use of a language outside the mainstream, Prolog, led to disenchantment among the JCMs.

Japan's Fifth Generation Computer Systems: Success or Failure?

https://www.reddit.com/r/prolog/comments/owb0xg/japans_fifth...

https://instadeq.com/blog/posts/japans-fifth-generation-comp...

>This post is a summary of content from papers covering the topic, it's mostly quotes from the papers from 1983, 1993 and 1997 with some edition, references to the present and future depend on the paper but should be easy to deduce. See the Sources section at the end.

[...]

>Prolog vs LISP

>Achieving such revolutionary goals would seem to require revolutionary techniques. Conventional programming languages, particularly those common in the late 1970s and early 1980s offered little leverage.

>The requirements clearly suggested the use of a rich, symbolic programming language capable of supporting a broad spectrum of programming styles.

>Two candidates existed: LISP which was the mainstream language of the US Artificial Intelligence community and Prolog which had a dedicated following in Europe.

>LISP had been used extensively as a systems programming language and had a tradition of carrying with it a featureful programming environment; it also had already become a large and somewhat messy system. Prolog, in contrast, was small and clean, but lacked any experience as an implementation language for operating systems or programming environments. [...]

>Fun Trivia

>The one commercial use we saw of the PSI machines was at Japan Air Lines, where the PSI-II machines were employed; ironically, they were remicrocoded as Lisp Machines.


See also here for an actively maintained and relatively portable implementation of Flat GHC, Strand and PCN for UNIX systems:

http://www.call-with-current-continuation.org/fleng/fleng.ht...



It's an interesting but fundamentally flawed idea. My suggestion would be to play with it and have fun but don't bet the house on it.

If you're curious what the flaw is, think Empiricism vs Rationalism.


There is also miniKanren (e.g. Clojure core.logic is an implementation). miniKanren is more generic than Prolog.


MiniKanren, being purely relational, is a subset of Prolog.


As someone who had the exceedingly rare opportunity to experience professional context Prolog.

Lol.

Lmao. Even.


cut.


yes.


[flagged]


I'm reporting this to dang via email. This uncivility has no place on this sub.



