Prolog Under the Hood: An Honest Look (1992) (amzi.com)
60 points by networked 16 days ago | 57 comments



>> Although Prolog's original intent was to allow programmers to specify programs in a syntax close to logic, that is not how Prolog works. In fact, a conceptual understanding of logic is not that useful for understanding Prolog.

Look, I get what the article is saying, but the fact of the matter is that Prolog has to run on real-world computers and those are not purely logical. They have working memories and storage, they have processors that take in instructions one-by-one and so on. It's all nice and well to say that you want a purely declarative language but, well, we just don't have the hardware for that sort of thing, at least not yet anyway (and not in the foreseeable future, given that nobody is really working on that- not anymore, not after the Fifth Generation Computer programme tanked).

So compromises have to be made. They have to be made at every step of the way. Compromises like the selection rule for resolvents (this is what makes the order of clauses in your program matter, when it doesn't in pure logic). Compromises like having extra-logical predicates, like write/1 and get/1 (the latter gets the next byte from the current input- what is a "byte" in logical terms? It isn't). Compromises like the cut, !/0 and even that stupid fail/0, that give you some control over the interpreter's program flow, which in and of itself is something completely outside of logic.
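A minimal sketch of two of those compromises in standard Prolog (the predicate names here are just made-up examples): clause order decides which answer you see first, and the cut commits to a choice in a way pure logic has no notion of.

```prolog
% In pure logic the order of these two clauses would not matter;
% Prolog's selection rule tries them top to bottom, so it does.
sign(X, negative) :- X < 0.
sign(X, nonneg)   :- X >= 0.

% max/3 with a cut: once the first clause succeeds, ! prunes the
% second clause from the search, controlling the interpreter's flow.
max(X, Y, X) :- X >= Y, !.
max(_, Y, Y).

% ?- max(3, 2, M).
% M = 3.
```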

But this is how you get a language that is 90% of the way to the purely declarative, logic programming ideal. You gotta be, whatsitcalled, pragmatic, innit.

I also don't agree that teaching the operational semantics of the language (the four-port model) is the best way to reveal the beauty of the language. I've been coding in Prolog since around 2008, and I was very comfortable with that model, but my understanding is still incomplete. I've recently read "Foundations of Inductive Logic Programming" that has a big intro to logic and logic programming with endless proofs about subsumption and the soundness of resolution and so on, and it's like, I was blind in one eye and could only see half the world. You gotta have a very solid grasp of both the theory and the practice to code well in Prolog.

... and there's your problem, right there. The investment you have to make to really master the language is huge and very few people have the resources for it.


> The investment you have to make to really master the language is huge

What's the payoff?


The payoff is that you master a skill that is hard to master, but I suspect you're asking more about what are the rewards in practical terms.

For me, the extra effort I put into learning Prolog has actually paid off in career advancement. First, because several potential employers noticed that I listed Prolog in my CV as one of the programming languages I know (a CV is where one is required to brag about what they know, right?) and definitely considered it a factor in their decision to at least interview me, or hire me (I know because they told me). I could of course just list Prolog in my CV without knowing any, but I might have been found out, and I'm a wuss so I'd never do that.

Second, because my interest in Prolog led to an interest in AI, because a majority of Prolog textbooks are also (Good, Old-Fashioned) AI textbooks and reading them to learn Prolog led me to learn about AI, and eventually take a Master's course in, well, actually, machine learning and data science, right around the time when it was exploding in popularity.

And third, because my interest in Prolog combined with my Master's in machine learning led me to apply for a PhD at one of the highest-rated technology research universities in the UK (and in the world- Imperial College), to study Inductive Logic Programming, which is basically machine-learning Prolog programs from Prolog programs, under the supervision of the guy who, literally, wrote the book.

Prolog has taken me places- places I'd never have been without it. Lucrative places, and prestigious places. Any way you look at it, for me at least, it was a worthy investment in time and effort.


Would you please list some of your most-recommended Prolog books? Old-fashioned Prolog AI books sound like a joy to read.

Thanks. Do you know of anyone using modern ML techniques to propose smarter search strategies?


Sorry, I haven't heard of anyone doing something like that. What do you have in mind?


Something like reinforcement learning on neural-net parameters, to quickly find solutions to a set of constraints.


Hey, sorry, I missed this. No, I don't know about anything like that, but constraint solving is not my cup of tea and I don't know the literature. There may be something like that around. I'd start by looking for papers on constraint solving in IJCAI or AAAI, where you're likely to find some, then follow their references. You might unearth something like that.


No worries. Thanks for the response.

(The last time I worked on a Prolog program was 20 years ago. It had only around 4000 lines, so take the following with a grain of salt.)

For me, a work day in Prolog looked like this: Write your idea down in 1 hour and then spend the remaining 7 hours figuring out where to place the cuts :)


Sometimes I think my Python code has 7 lines of validation or plumbing for each line of algorithm.


Not far from the truth, but I guess it comes down to where you place the technical debt: up front (with statically typed languages), or after you have something working (dynamic languages). I favor the latter because of the low cost of experimentation, but you do need to get around to hardening the system, and that part has real costs.


While I agree about the technical debt here, I think you slightly misplace it. You take on debt by using the libraries available in Python. Same as you take loans from others who have capital. The currency here is their knowledge.

That is, the dynamic/static divide is a distraction. Easy to talk towards, but likely nowhere near as important as other factors.


It has nothing to do with technical debt or typing; it's that input data can violate the assumptions of the algorithm.

Duck typing and Python's large ecosystem do tend to exacerbate that, though.


> After a brief exposure, however, many programmers fail to appreciate its full breadth and never use it again.

Yeah. That was me. A friend of mine tried to clue me in to Prolog about twenty years ago and I didn't "get it".

Recently I revisited it and realized that about 1/3 of my working career was wasted by not using Prolog.

My strong suggestion to any programmers reading this: learn and use Prolog.


>> Yeah. That was me. A friend of mine tried to clue me in to Prolog about twenty years ago and I didn't "get it".

If it helps, I remember a senior programmer of mine once saying that out of 35 people in his Master's class, nobody really got it.

It's the way that Prolog is taught at uni, I think. Because there is so much material to cover and it's very hard to know where to start, most courses end up teaching whatever the tutors think are the most important bits. Most end up teaching it like any other language, starting from the syntax and going on to examples of "how to do such-and-such in Prolog" - like, how to loop over a list etc.

That is how I learned Prolog myself and it took me, I don't know, ten years? Before I could really get about half of what was going on. I think a much better approach would be to start with the theory of logic programming- spend two or three courses covering propositional and first-order logic, then Horn clauses, subsumption and resolution- then, when the students have at least a general idea of what logic programming is all about, get started with Prolog- which, after all, is just one possible logic programming language. Cover what it does best, what compromises it makes and what is good and bad about it. Instill some enthusiasm for the language. It's a really brilliant piece of theory and engineering and there are very good historical reasons why it is the way it is- the good, the bad, and the ugly.
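As a tiny taste of that theory-first route, here's a sketch (with made-up predicates) of how a query is answered by resolution over Horn clauses- each rule has one head and a conjunction of body goals, and Prolog repeatedly unifies a goal with a clause head and replaces it with the body:

```prolog
% Horn clauses: facts are rules with an empty body.
parent(tom, ann).
parent(ann, joe).

ancestor(X, Y) :- parent(X, Y).
ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).

% Resolving ?- ancestor(tom, joe):
%   the second clause unifies with X = tom, Y = joe, leaving the
%   goals parent(tom, Z), ancestor(Z, joe); the fact parent(tom, ann)
%   binds Z = ann, and ancestor(ann, joe) then resolves against the
%   first clause via the fact parent(ann, joe). Proof found.
```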

>> My strong suggestion to any programmers reading this: learn and use Prolog.

Absolutely. Even more so- learn logic, and logic programming. Its impact on the way you code and the way you think is going to be huge. Hugely positive, that is.


I was literally hanging out with Doug Miles in a cafe in Seattle in the late 90's. I feel like an idiot in hindsight. He was so patient with me and I must have let him down.

Oh well, the penny has dropped, and better late than never.

I think you're right. I suspect that it would be easier (and likely advantageous) to teach predicate logic first and then introduce Prolog as a kind of straightforward calculator for it, than to teach people imperative programming and then teach them Prolog from that context.

I will say that it's really weird to go from being a Python expert to a Prolog tyro, but you couldn't pay me enough to go back now.

> It's a really brilliant piece of theory and engineering and there's very good historical reasons why it is the way it is, the good and the bad, and the ugly.

Absolutely! Part of the fun has been learning about the rich history and all the research and implementations that have been done.


Have you published any prolog code that demonstrates this? What domains are you working in?

Programming is so diverse that I’m skeptical it applies to everyone, at least without further justification.


> Have you published any prolog code that demonstrates this? What domains are you working in?

Yes, an interpreter for a functional language, specifically Joy[1]. I implemented a Joy interpreter in Python[2] and had just implemented type inference[3] and begun working towards a compiler when a paper went by here on HN about using Prolog to write compilers[4].

That initiated what I'm calling a "conversion experience" where I dug into Prolog and had the aforementioned epiphany.

The crux came when, after reimplementing the Joy interpreter in just four lines of Prolog, I reimplemented the type inferencer and discovered that it was identical to the interpreter. [5]

Two pages of Prolog code replaced about ten pages of Python code, and the resulting interpreter is more powerful and flexible than the previous implementation.

> Programming is so diverse that I’m skeptical it applies to everyone, at least without further justification.

There are a few non-Prolog-shaped domains, I'll grant you, but they are rare. I coined a new personal rule to the effect of "Use Prolog unless you absolutely can't." YMMV

[1] https://en.wikipedia.org/wiki/Joy_(programming_language) http://www.kevinalbrecht.com/code/joy-mirror/joy.html

[2] http://joypy.osdn.io/

[3] http://joypy.osdn.io/notebooks/Types.html

[4] "Logic Programming and Compiler Writing" Warren 1980 https://www.researchgate.net/publication/220280364_Logic_Pro... https://news.ycombinator.com/item?id=17674859

[5] https://osdn.net/projects/joypy/scm/hg/Joypy/blobs/tip/thun/... This is the published prolog code you asked for, compare to the Python implementation. https://osdn.net/projects/joypy/scm/hg/Joypy/tree/tip/joy/


Have you tried any other logic languages, like Mercury?


Not yet. "I am just an egg." :-)


Thanks for the links! I have two replies for this: (1) I would like to implement a type inferencer in Prolog and (2) This doesn't justify what you originally said (but we don't have to argue that point. I appreciate the response. :) )

-----

(1) Can you elaborate on what you mean by the type inferencer being identical to the interpreter? I see the type inferencer in Python in joy/utils/types.py. And I see your interpreter in Prolog -- but how does that do type inference?

Why am I interested in type inference? I wrote a shell interpreter in 19K lines of Python:

http://www.oilshell.org/release/0.6.pre8/metrics.wwz/line-co... (scroll to the end for line count)

I want to compile it to faster code, and that likely involves type inference in the style of PyPy or ShedSkin (which are both used in production).

Background: http://www.oilshell.org/blog/2018/03/04.html

My understanding is that global type inference for 19K lines of code might be out of reach in terms of speed? As far as I know type inference algorithms are exponential in the worst case. While I'm using a relatively small subset of Python, the type system may still be considered rich (classes with single inheritance, __call__, varargs, perhaps typed dictionaries.)

Do you think Prolog is feasible for this problem? Does it offer you help in terms of debugging and performance problems? Maybe I need some syntax for explicit annotation so I can "help" the backtracking succeed?

I am not exactly sure how it will work. But doing it in Prolog seems somewhat appealing, if I can "prototype" the type system. Though there is the practical problem of simply getting the types into Prolog and back out. I think I would have to generate a prolog program with at least one clause for every variable in my 19K line interpreter?

In any case, thanks for the links to your code. I will be looking at these further.

-----

(2) A recent comment from someone with considerable expertise says that backtracking search just doesn't come up that often:

https://www.reddit.com/r/ProgrammingLanguages/comments/9kb9z...

I seem to recall that Paul didn't just work with Mercury -- he worked on it -- and his Ph.D. work shows that: https://paul.bone.id.au/

I think type inference is a great example of where it DOES come up. But very few people implement type systems (and production type checkers are often in imperative languages like TypeScript, or occasionally OCaml with Hack).

Various types of scheduling may be another place, but I would also count that as a very small proportion of programming work.

Not to mention games, network programming, kernels, etc. I believe you're wildly overstating the applicability of Prolog. A program might have small subproblems that could be expressed in Prolog, but the entire program doesn't fit that paradigm.

Also, I also asked in that Reddit thread for "production" uses of Prolog, and I don't really see it. Erlang moved away from it for "engineering reasons", and they were one of the biggest advocates (I read Joe Armstrong's paper about prototyping Erlang in Prolog). Rust is maybe moving toward it, but they're probably using their own implementation and not an off-the-shelf Prolog interpreter.

I think Prolog ironically falls in the category as Forth (and Joy is not even as practical as Forth). It's incredibly expressive for some things, and it has generated many epiphanies. But if you try to use it in production, you eventually "fall off a cliff" for parts of the program that don't fit the model.

I experimented with both Forth and Joy a number of years ago, and have Manfred von Thun's papers printed out, sitting on my bookshelf. The quadratic formula was one small example of things that are awkward in stack languages, but I think there are others on a larger, architectural scale that are "dealbreakers".

The conclusion I came to about Forth/Joy was that they're worth learning but there's a reason they aren't used much in production. I will probably end up feeling the same way about logic programming (like Paul does after 10 years of deep experience), but I still want to learn it for the reasons everyone says.


"I think Prolog ironically falls in the category as Forth (and Joy is not even as practical as Forth). It's incredibly expressive for some things, and it has generated many epiphanies. But if you try to use it in production, you eventually "fall off a cliff" for parts of the program that don't fit the model."

I believe that's correct. It's why AI research usually used LISP with Prolog handling specific things it's good at. Often as a DSL, too. A good example which still sells is Allegro CL:

https://franz.com/products/allegro-common-lisp/

On Prolog itself, there are commercial users outside AI/rules/NLP like this finance company:

https://dtai.cs.kuleuven.be/CHR/files/Elston_SecuritEase.pdf

Far as efficiencies, most Prologs appear to be based on abstract machines that weren't quite optimized for real hardware. Whereas, Aquarius Prolog that Peter Van Roy posted here once seemed to show there's a lot of efficiency to be gained with optimized-for-metal designs:

https://www.researchgate.net/publication/2813183_The_Making_...

So, that's just a few things I have. I never learned Prolog since I think it's too limited. There are quite a few systems that make tasks easy to express, with Prolog being one. I think the ideal system would combine high efficiency in the base language, metaprogramming features (or term-rewriting logics), contracts (esp. for automated analysis and testing), stuff like Prolog in DSL form, functional programming like Haskell/Mercury, and so on. All designed to interoperate well with the C ecosystem and/or use assembly for optimized modules. The general idea is that you can always use the right tool for the job, with the base language integrating them smoothly in a way that maximizes productivity during development and performance during deployment.


Ah! Oilshell, I knew there was a reason I remembered your username. :)

I want to preface everything below by pointing out that, while I've been programming for about a quarter of a century, I've been programming in Prolog for about a quarter of a year. It's entirely likely that I haven't encountered the downsides of Prolog in an appreciable way yet.

;-)

Now that I've admitted my ignorance, I'll proceed to pontificate in the traditions of this fine forum. ;-D

> Can you elaborate on what you mean by the type inferencer being identical to the interpreter? I see the type inferencer in Python in joy/utils/types.py. And I see your interpreter in Prolog -- but how does that do type inference?

Um, I can describe what happened. I wrote the Prolog interpreter, then I wrote the type inferencer. Reviewing them afterward, I realized that I could rearrange the inferencer code a little bit to make it identical to the interpreter code. So I deleted it. This is something I discovered, not something I planned.

Basically, Prolog already does what the Python inferencer code does. If you give it logical variables instead of actual stack structures it deduces what they would be, for example:

    ?- thun([dup, *], StackIn, StackOut).
    StackIn = [_9846|_9848],
    StackOut = [_9864|_9848],
    _9846^2#=_9864,
    _9864 in 0..sup .
This shows that the input stack has (at least) one item, which will be replaced by its square, which also implies that the result is between zero and "supremum" i.e. it's positive.

> I want to compile it to faster code, and that likely involves type inference in the style of PyPy or ShedSkin (which are both used in production).

So you want type inference for the Python code? Look for PySonar (you'll have to dig for it; the creator took it down for some reason, but copies and forks are floating around). It's a pretty sophisticated analyzer that can follow Python's dynamic nature.

But if your main goal is just faster performance for your Python code (as opposed to a type inferencer for Oilshell code, which is what I thought you were talking about at first) I would try Cython.

That said, yes, in my limited experience Prolog would be great for this. (I love Python FWIW but personally I would hate to try to type it even using Prolog.)

> I think I would have to generate a prolog program with at least one clause for every variable in my 19K line interpreter?

Oh no, you'd build a kind of symbol table as you go, I'd expect. I'll try to rustle up some references for you tomorrow. I know I've seen some.

> A recent comment from someone with considerable expertise says that backtracking search just doesn't come up that often:

But it's trivial to implement a meta-interpreter to use other search strategies. I took a look at Mercury (and a ton of other systems; there are dozens of variations and hybrids) but I'm sticking to ISO Prolog as much as possible for now. One of the fascinating things about Prolog is how incredibly simple it is (compared to other programming languages). I want to take it as far as I can until I'm forced to use something additional by some real-world constraint.
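To give a feel for how small such a meta-interpreter is, here's the classic "vanilla" version plus a depth-bounded variant (a sketch; the edge/path predicates are made-up test data, and the dynamic declaration is needed so clause/2 can inspect them in SWI-Prolog):

```prolog
:- dynamic edge/2, path/2.

edge(a, b).
edge(b, c).
path(X, Y) :- edge(X, Y).
path(X, Y) :- edge(X, Z), path(Z, Y).

% Vanilla meta-interpreter: mirrors Prolog's own depth-first,
% left-to-right strategy in three clauses.
solve(true).
solve((A, B)) :- solve(A), solve(B).
solve(H) :- clause(H, Body), solve(Body).

% Variant with a depth bound: the basis for iterative deepening,
% a different (complete) search strategy in a few extra lines.
solve_d(true, _).
solve_d((A, B), D) :- solve_d(A, D), solve_d(B, D).
solve_d(H, D) :- D > 0, D1 is D - 1, clause(H, Body), solve_d(Body, D1).
```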

(I've got reddit blocked in my hosts file, it's too late tonight but I'll unblock it and read the thread you mentioned tomorrow.)

You mention type inference and scheduling, ...

> Not to mention games, network programming, kernels, etc. I believe you're wildly overstating the applicability of Prolog. A program might have small subproblems that could be expressed in Prolog, but the entire program doesn't fit that paradigm.

Here's the thing about that: I'm coming to Prolog from a context of implementing a Joy interpreter and compiler; I'm working with Joy because it provides a very elegant language/notation for deriving programs (in the mathy Functional Programming way.) My ultimate goal is a system that generates provably-correct machine code (but that's also a lot simpler than the existing toolchains for that purpose.)

Along comes this paper on using Prolog to write compilers and it's sooooo good. It's also just the beginning. Folks kept working:

"Parsing and Compiling Using Prolog" Jacques Cohen and Timothy J. Hickey

"Provably Correct Code Generation: A Case Study" Qian Wang, Gopal Gupta

"From Programs to Object Code and back again using Logic Programming: Compilation and Decompilation" Jonathan Peter Bowen

"Automatic Derivation of Code Generators from Machine Descriptions" R. G. G. Cattell

Etc...

So now I have Prolog, with an embedded Joy interpreter, and in a minute I'll have the compiler. My strong hunch is that this combo will be a big win.

I think the pattern will be inverted: most of a given program can be specified in Prolog(-ish) with Joy to handle nitty-gritty stuff that just doesn't fit well in the Logical Paradigm. For things that really don't fit Prolog nor Joy, well, libraries and extensions. Prolog can load shared libs.

Without going into detail, I should mention that Alan Kay's VPRI "Desktop to the metal in 20,000 LOC" STEPS research project makes extensive use of OMeta parsers, and Prolog DCGs are similar but more powerful.
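For a flavor of DCGs, here's a toy grammar (my own example, not from STEPS) for nested parenthesized digit pairs; the rule notation is translated by Prolog into ordinary clauses, and because the rules are relations, phrase/2 can run the same grammar as a parser or a generator:

```prolog
:- set_prolog_flag(double_quotes, codes).

% expr//1 builds a parse tree while consuming a list of codes.
expr(num(D))     --> digit(D).
expr(pair(A, B)) --> "(", expr(A), ",", expr(B), ")".

% A single decimal digit, converted from its character code.
digit(D) --> [C], { C >= 0'0, C =< 0'9, D is C - 0'0 }.

% ?- phrase(expr(T), `(1,(2,3))`).
% T = pair(num(1), pair(num(2), num(3))).
```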

Also, "Compiling to categories" Conal Elliott http://conal.net/papers/compiling-to-categories/ Prolog+Joy makes this really easy...

> The quadratic formula was one small example of things that are awkward in stack languages

You're talking about "The annoying quadratic formula" article[1]? What do you think of this derivation[2]?

[1] http://www.kevinalbrecht.com/code/joy-mirror/jp-quadratic.ht...

[2] http://joypy.osdn.io/notebooks/Quadratic.html

Cheers!


I spent the day hacking on this toy Python type inferencer in Prolog:

https://github.com/andychu/hatlog/tree/master

It gave me at least a start of an understanding of Prolog.

I watched the Strange Loop talk linked in the README there too. I think some of the downsides he mentioned are right on -- for example, the fact that Prolog will sometimes just spit "false" at you and is hard to debug. And he even fired up the graphical debugger in the talk.

I think this will be a problem for a "production" type inference engine (although technically speaking my project is just a tool for Oil developers, so the bar is lower.)

It's similar to the problem with generated parsers. In theory, it's nice to generate parsers from grammars. In practice, mature parsers are all written by hand, mainly for good error messages. I see the importance of this in the OSH parser, which is large and comprehensive.

Also, I don't think Prolog's data structures are convenient for representing "real" programming languages. As far as I can tell, you basically get what you do in Lisp, and that isn't enough IMO. Algebraic data types would be nicer, and even Java/TypeScript-like objects are better IMO. Oil uses Zephyr ASDL for this purpose.

Still, I would like my type system to be closer to mathematics than ad hoc procedural code. Learning some Prolog (and logic progamming in general) seems like a good way to do that.

Check out the README and the code, and let me know if you have any comments on the issues I raised. We can continue this discussion in e-mail if you like at andy@oilshell.org.


> I watched the Strange Loop talk linked in the README there too. I think some of the downsides he mentioned are right on -- for example, the fact that Prolog will sometimes just spit "false" at you and is hard to debug. And he even fired up the graphical debugger in the talk.

Well, if we omit the dumb mistakes that every programmer makes, and every mistake that dumb programmers make, is the situation with Prolog any worse than, say, trying to grok some inscrutable traceback?

I've seen about a five-to-one "compression" of code, meaning that a line of Prolog seems to be about equivalent to five lines of Python, and sometimes much more. From that alone I can expect a five-fold reduction in bugs, eh? But it also seems to me to be more difficult to introduce bugs in Prolog, and somewhat easier to debug them when they do arise.

Prolog also admits of something called "declarative debugging", cf. https://www.metalevel.at/prolog/debugging

My limited experience has so far been really good. But again, I've only been programming in Prolog for a few months. None of my programs have grown large yet, so perhaps in a year or two I might sing a different tune, eh?

> Also, I don't think Prolog's data structures are convenient for representing "real" programming languages. As far as I can tell, you basically get what you do in Lisp, and that isn't enough IMO. Algebraic data types would be nicer, and even Java/TypeScript-like objects are better IMO. Oil uses Zephyr ASDL for this purpose.

At first this might seem like a handicap, but really it's not, everything you need is there but it's not always easy to see at first. For example, Prolog subsumes the Relational Model, so you already have the functionality of an RDBMS.

Something like ADTs are easy to emulate with "tag" atoms. I don't even know what the technique would be called in Prolog, it's just something you do.
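A sketch of that "tag" idiom, using nothing beyond standard Prolog (the shape predicates are invented for illustration)- the functor plays the role of the data constructor, and clause-head unification plays the role of pattern matching:

```prolog
% ADT-style values as tagged compound terms: circle(R) and
% rect(W, H) act like the constructors of a 'shape' type.
area(circle(R), A) :- A is pi * R * R.
area(rect(W, H), A) :- A is W * H.

% An unknown tag simply fails to unify with any clause head,
% rather than raising a compile-time type error.
% ?- area(rect(3, 4), A).
% A = 12.
```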

Some Prologs have grown dict-like structures: http://www.swi-prolog.org/pldoc/man?section=bidicts

But in many cases you wouldn't really need them because your facts and rules are already effectively defining them. The data and relations that you want to describe with "real" PLs are better described in the abstractions that Prolog provides, in most cases.

Zephyr ASDL looks interesting but it's also exactly the sort of thing that I would pass over in favor of just using Prolog.

I wrote a parser for the Oberon language in a day and it's not much larger than the grammar rules. I also wrote an assembler for the Project Oberon RISC CPU that's only two pages long, and it's also a disassembler. I started to write the innards of the compiler (connecting the AST to the ASM) and it was dreamy, but I realized I didn't need most of it for Joy so I set it aside. (Joy doesn't use variables so it doesn't require a symbol table. It doesn't use subroutines so it doesn't require stack frame allocation. And so on.)

I freely admit that there are non-Prolog-shaped problems in the world, but I have to insist that they are fewer than they might seem. :-)

> Still, I would like my type system to be closer to mathematics than ad hoc procedural code. Learning some Prolog (and logic programming in general) seems like a good way to do that.

Check out "Automatic Type Inference via Partial Evaluation" by Aaron Tomb & Cormac Flanagan https://galois.com/wp-content/uploads/2014/07/pub_AT_Automat...

From the abstract:

> Type checking and type inference are fundamentally similar problems. However, the algorithms for performing the two operations, on the same type system, often differ significantly. The type checker is typically a straightforward encoding of the original type rules. For many systems, type inference is performed using a two-phase, constraint-based algorithm.

> We present an approach that, given the original type rules written as clauses in a logic programming language, automatically generates an efficient, two-phase, constraint-based type inference algorithm. Our approach works by partially evaluating the type checking rules with respect to the target program to yield a set of constraints suitable for input to an external constraint solver. This approach avoids the need to manually develop and verify a separate type inference algorithm, and is ideal for experimentation with and rapid prototyping of novel type systems.

> We can continue this discussion in e-mail if you like at ...

Cheers! I'm kind of a recluse so please don't take it personally if I don't, but let me recommend comp.lang.prolog and also the SWI-Prolog forum http://groups.google.com/group/swi-prolog


20 years ago: you are right. Although be warned, Prolog has some killer issues. Principally, the way the knowledge base and search system work means that you will end up stirring fiddle factors (cut) into your logic program, and at scale the brain-boggling complexity of what is going on can lead to complete, sudden and irretrievable project failure.

As in : it doesn't work, no one knows why, no one can fix it and no one has a plan that doesn't involve starting from the beginning and doing it differently.

Consider two programs.

Good prolog:

    on(a, b).
    on(b, c).

    above(X, Y) :- on(X, Y).
    above(X, Y) :- on(X, Z), above(Z, Y).

    ?- above(a, c).
    true.

You'll get TRUE (yay), a is above c! GOOOOOOOD prolog.

But now:

    on(a, b).
    on(b, c).

    above(X, Y) :- above(X, Z), on(Z, Y).
    above(X, Y) :- on(X, Y).

    ?- above(a, c).
    ERROR: Out of local stack
      Exception: (1,675,220) above(a, _4808) ?

BAD BAD BAD BAD prolog. Go to your room!


Prolog gets stuck expanding the first search until it can't carry on. One line is out of order in the program and you are dead, and the problem is that you will not find that line until that question is asked- and when you find it (in a real example) it really is the devil's own job to work out where the expansion is coming from and how to fix it.
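Worth noting: in Prologs with tabling (SWI-Prolog, XSB, YAP), the left-recursive version can be rescued with a single declaration. Tabling (SLG resolution) memoizes calls to the tabled predicate, so the repeated expansion of above/2 terminates instead of looping- a sketch in SWI-Prolog:

```prolog
% Memoize above/2: tabling detects the repeated left-recursive call
% and suspends it instead of expanding forever.
:- table above/2.

on(a, b).
on(b, c).

% The "bad" clause order from above- left recursion first.
above(X, Y) :- above(X, Z), on(Z, Y).
above(X, Y) :- on(X, Y).

% ?- above(a, c).
% true.
```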

But there you go - search based logic resolution, brought to you by 1970's computer architecture (small main store) and solvers. Note, not only is "it doesn't bloody work" an issue, but also (probably much worse) "it keeps hanging for five minutes". This inconsistency and problem is hard to replicate and very, very hard to debug. It's caused by an occasional big search happening in the knowledge base, but a big search that doesn't end up exploding the system, it's often accompanied by shifty developer behaviour involving parameters and special cases. In a real project it's a nightmare.

Now : NOOOOOOO! Instead use Problog or something similar and reason using Answer Sets. For your first steps an answer set system looks very like Prolog, but does not suffer from the issues above (as far as I can see). There has not been (so far) a big charge to use this in prod (with Prolog and KBS's there was, it failed, because above) because everyone is playing with 40 layer 200 billion parameter neural networks and wondering why proving they can beat a tiny part of one benchmark or mostly steer a car without killing many people is not translating into application success. And this is a good thing because I don't think that the tech is quite there yet and the community needs five years and about half a dozen big "let's make this practical" applied research projects (btw. keen on partners?) to actually make this manageable for people with deadlines and investors.

But - it's going to help you more than Prolog.


>> This inconsistency and problem is hard to replicate and very, very hard to debug.

My experience is the opposite. Because Prolog is an interpreted language, and because it tends not to need the global dependencies other languages have (e.g. no variables declared outside a predicate's closure), it's very easy and convenient to program in a "bottom-up" approach.

Which means, in plain English, that every time you add a new predicate to your program, you can test it immediately on the command line and see whether it "goes infinite" or behaves strangely in other ways. It's a bit like test-driven development; call it REPL-driven programming.
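For instance (a made-up predicate, just to illustrate the workflow):

    % in my_lists.pl
    last_of([X], X).
    last_of([_|T], X) :- last_of(T, X).

    ?- [my_lists].
    true.

    ?- last_of([a, b, c], X).
    X = c.

Define, reload, poke at it from the listener, move on. If it loops or gives weird answers, you find out while the predicate is still three lines long.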

And let's not forget that you can get into an infinite-loop situation in any language. I have fond memories of working at a C# shop where one line of code in a shared lib brought down the company's entire stack of deployed web apps, because someone hadn't tested a terminating condition in an iterative loop. The post-mortem went a bit strange: it seems (I wasn't present) that upper management, who didn't know much about programming, advised that iterative constructs like for-loops should henceforth be avoided like the plague and recursion used instead. Obviously, nobody even tried to do that.

In another incident at the same shop, one of the designers sent the in-house, Django-like templating language into an infinite recursion. Apparently the Django-like language (called "mango") was Turing-complete. Whaddaya know.

Playing with Turing machines is dangerous. They tend to get tied up in loops. You should know that, understand why, and code so as to avoid it, in any language, not just Prolog.


Most relevant Prolog systems use a Warren Abstract Machine (WAM)-style implementation with a JIT.


They still have REPLs, which is the relevant part.


Having a REPL doesn't mean the implementation is using an interpreter.

Many languages do have REPLs which JIT before execution.


Fine, 'scuse my imprecision: it's JIT interpreted.

The point is that you can do bottom-up programming at the command line (a.k.a. the "listener" in Prolog).


Yup - if you have control over your code base and can keep it totally consistent you have a fighting chance, but, as I say, some shifty person introduces a problem and it all goes very, very wrong. The big demonstration is that it's really rarely used in industry. Which is a big shame: if we could sort out programming-in-the-large, then I believe big value is on the table. The new generation of reasoners are a good start!


If "some shifty person introduces a problem", then that's bad news for any code base in any language. Nothing specific to Prolog.

> The big demonstration is that it's really rarely used in industry.

By that reasoning, software written in Javascript must be extremely reliable and easy to debug. >.>


That's an unfair metric, because Prolog was never meant to develop web apps or whatever the majority of use cases is. It's not used in AI either, but I'm glad to hear for the first time that it's being worked on, so chances are there could be more I haven't heard of. And there are similar, reduced systems like miniKanren. I don't know how Coq, Lean or other theorem provers compare, but the space is hugely interesting.


> above(X,Y) :- above(X,Z), on(Z,Y).

This line alone tells me I'm going to have infinite recursion. That would be a problem in any language, of course, and isn't particularly hard to spot here.
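And the fix is the usual one for left recursion: put the base case first and recurse only after consuming an on/2 step (a sketch, using the same predicates):

    above(X,Y) :- on(X,Y).
    above(X,Y) :- on(X,Z), above(Z,Y).

With a finite, acyclic set of on/2 facts this version terminates, because every recursive call is one on/2 step further down the tower.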


Yeah but no: there are many languages where recursion is not the default solution, and recursive versus iterative loops is indeed an often-mentioned weirdness, for lack of a better word, at least for beginners, which is the perspective I can speak to. That goes for functional programming too, where the exact same syntax is used, so it seems to have its appeal.

Also, I want to note, although you didn't say otherwise, that the recursion may be hidden behind layers of indirection.

The GP writes as if proper testing and code review were unheard of, so the issue would be the same in any language, in principle (save for testing facilities, which weren't the point, however).


Interesting. Thank you for sharing that. Gave it a shot on SWI Prolog and got the same error.

It seems like a problem that ought to be solvable with static analysis or a fuzzer -- solvable in the sense that the error is detected before the question is asked. I imagine if Prolog had wider adoption, somebody would have taken a crack at it.


The problem is nontermination. Static analysis of arbitrary programs to detect nontermination is the famous Halting Problem and known to be impossible.

At its core, Prolog combines term unification with a depth-first search strategy. Implement those two primitives and you get a Prolog. But there are problems where depth-first search is not the best approach (e.g. when the search space is potentially infinite), and being able to code a different strategy is important.
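Those two primitives fit in a few lines. A classic "vanilla" meta-interpreter (a sketch; it assumes the program consists of pure user-defined rules visible to clause/2, e.g. declared dynamic) looks roughly like:

    % solve(Goal): prove Goal against the clause database.
    solve(true) :- !.
    solve((A, B)) :- !, solve(A), solve(B).
    solve(Goal) :- clause(Goal, Body), solve(Body).

Unification happens in clause/2's head matching; depth-first search is inherited from the host's own backtracking. Rewrite solve/1 to keep an explicit agenda of goals and you can swap in breadth-first or iterative deepening instead.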

Modern languages support limited unification in the form of destructuring assignment. If they could make it bidirectional without also completely confusing everyone used to more traditional programming languages, Prolog wouldn't have many advantages left.
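For contrast, unification really is the two-way version of destructuring: bindings can flow in either direction, or both at once, as in this toy query:

    ?- point(X, 2) = point(1, Y).
    X = 1,
    Y = 2.

A one-way destructuring assignment can only pull values out of the right-hand side; here each side fills in the other's holes.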


"The problem is nontermination. Static analysis of arbitrary programs to detect nontermination is the famous Halting Problem and known to be impossible."

People reading that might think you can never prove termination of programs automatically. That sounds overstated given the results coming out of CompSci. There are a number of tools that can automatically prove termination of a given program. AProVE is an older example that, like many of them, is based on term rewriting:

https://www.researchgate.net/publication/2877806_Automated_T...

Terminator is one for C programs:

https://www.microsoft.com/en-us/research/video/automatically...

There's also a site where many tool builders competed in lots of categories:

http://termination-portal.org/wiki/Termination_Competition

Lots of research happens under the "temporal logic" banner, where programs are run through model checkers, provers, and so on. Anyway, it seems proving termination of useful programs, protocols, or at least their critical portions is often possible, and doable with current tools for some useful subset.


Thank you. I wanted to write something along the same vein (albeit much less eloquently!) but didn't get the chance.


You can prove termination in many cases. But there are infinitely many cases where you can't prove it, and yet the program will terminate.


I'm still learning about how much of a problem that is in practice. I have less data on analyzing infinite-state systems. What both real-time and functional developers said to do was this:

1. Design the main computation or loop to operate on a fixed chunk of data in a way that can be verified in isolation.

2. Design a component that either streams in new data or says there is none. Verify it in isolation.

3. Compose the two. The result will take X proven timing multiplied by Y amount of streamed data for as long as it runs or do nothing if stream is empty.

Shouldn't this handle what you describe in the general case or does that simple strategy fall apart even on common stuff with event loops?


FWIW in the limited time I've been using Prolog that pattern has worked fine for me.

A factor to consider: Prolog is old. It's very unlikely you'll encounter some problem or issue that has not already been addressed, often by a team of researchers.

For example, when I was exploring Prolog I tried a simple transformation on syllogisms.

I found some code online that had them in this form:

    syll(all(M,P), all(S,M), all(S,P)).
    % This is barbara in the 1st figure.

    syll(no(M,P), all(S,M), no(S,P)).
    % This is celarent in the 1st figure.
Etc... I transformed them using term_expansion/2 [1] with this rule:

    term_expansion(syll(A, B, C), (C :- B, A)).
To generate rules like these:

    all(A, C) :- all(A, B), all(B, C).

    no(A, B) :- all(A, C), no(C, B).
Etc... However, these rules, plus a few facts:

    all(dogs, canines).
    all(canines, mammals).

led to huge open-ended search.

    ?- all(A, B).
    ERROR: Out of local stack

It turns out I can just use call_with_depth_limit/3 [2] to bound the search depth:

    ?- call_with_depth_limit(all(A, B), 10, R).

    A = dogs,
    B = mammals,
    R = 11 ;

    A = dogs,
    B = canines,
    R = 11 ;

    A = canines,
    B = mammals,
    R = 1.
It finds some solutions. Not all Prologs have call_with_depth_limit/3 built-in but it's easy to write a meta-interpreter that does the same thing.[3]
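A sketch of such a meta-interpreter (assumes pure user-defined rules visible to clause/2; not production code):

    % solve(Goal, D): prove Goal, descending at most D levels deep.
    solve(true, _) :- !.
    solve((A, B), D) :- !, solve(A, D), solve(B, D).
    solve(Goal, D) :-
        D > 0,
        D1 is D - 1,
        clause(Goal, Body),
        solve(Body, D1).

Wrap it in a driver that retries with ever-larger bounds and you have iterative deepening [3].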

From the POV of security, the trick I want to explore is writing Prolog code to generate provably-correct machine code.

[1] http://www.swi-prolog.org/pldoc/doc_for?object=term_expansio...

[2] http://www.swi-prolog.org/pldoc/doc_for?object=call_with_dep...

[3] https://en.wikipedia.org/wiki/Iterative_deepening_depth-firs...


Did you try using answer sets?

If I swapped statements in most languages the result would also be incorrect. So why make a big deal of it, as if order shouldn't matter?


The two programs ought to be formally equivalent, in the ideal Prolog model. Usually when you swap two lines in a programming language, the change in semantics is transparent: you've changed the order of operations. But here the change in semantics is implicit. You have to know details about the search algorithm being run under the hood and where those details diverge from the ideal Prolog model. In other words, it's a leaky abstraction over a complex and unreliable machine.


I would call it a leaky abstraction over a simple and reliable machine.

The ideal Prolog model is basically (timeless) predicate logic, eh? I think of Prolog as a useful but not foolproof mechanical calculator for predicate logic, and I'm happy that it works as well as it does with such a simple implementation (chronological backtracking with unification.)

The glass is half-full, rejoice! :-)


Yup, but there's a keg party round the corner, come on!!


I'm too busy building a robotic factory that makes breweries.

"ought to be" under what assumption? Prolog is known not to be purely declarative. If you wish for such behavior use Datalog. If you use a purely declarative language you will run into restrictions that you won't face with Prolog. it's a trade off really.


Can you give one or two examples of where Prolog would be more effective than other tools? I was exposed to it in college but never used it beyond that.


Sure! DCGs (Definite Clause Grammars)[1] are similar to the OMeta PEG parsers[2] but more powerful and flexible.

Constraint logic programming, e.g. CLP(FD)[3], makes many kinds of problems easy to solve. I wrote what I call the "Boring Sudoku Solver"[4]. It's so simple it's dull. It'll solve, validate, and generate Sudoku puzzles.

Interestingly, both of these are considered somewhat newfangled in the Prolog community (despite being ancient by, say, JS standards.)
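To give a flavour of [1], here's a toy DCG (grammar and vocabulary invented for illustration):

    sentence    --> noun_phrase, verb_phrase.
    noun_phrase --> [the], noun.
    verb_phrase --> [sleeps].
    noun        --> [cat].
    noun        --> [dog].

    ?- phrase(sentence, [the, cat, sleeps]).
    true.

The same grammar runs backwards, too: phrase(sentence, Words) with Words unbound enumerates the sentences it accepts.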

[1] https://en.wikipedia.org/wiki/Definite_clause_grammar http://www.amzi.com/manuals/amzi/pro/ref_dcg.htm https://www.metalevel.at/prolog/dcg http://www.pathwayslms.com/swipltuts/dcg/

[2] http://www.vpri.org/pdf/tr2007003_ometa.pdf

[3] https://en.wikipedia.org/wiki/Constraint_logic_programming http://pathwayslms.com/swipltuts/clpfd/clpfd.html https://www.metalevel.at/prolog/clpfd http://www.swi-prolog.org/man/clpfd.html

[4] https://swish.swi-prolog.org/p/Boring%20Sudoku.swinb


Recommendations if you're on the JVM, primarily?


I think the real problem is that it's a lot more convenient to add logic capabilities to whatever language than to squeeze the entire solution into Prolog's limited view of the world. I know there are extensions and whatnots to build database driven GUI's and web apps in Prolog, but everything I've seen looks like one step to the side and two backwards. Possible does not imply useful. If it was such a great idea, plenty of people would already be doing it by now.


I think you are partly right, but logic programming is very useful and elegant for some things, like this static code analyzer for Clojure built with core.logic: https://github.com/jonase/kibit


Writing Prolog was one of my favorite things in college. It's not the language that appealed to me, but the underlying ability to express and validate logic. It's so clean and delightful. I do wish more people would get introduced to logic this way, it's a very powerful instrument for everyday life.



