Look, I get what the article is saying, but the fact of the matter is that
Prolog has to run on real-world computers and those are not purely logical.
They have working memories and storage, they have processors that take in
instructions one-by-one and so on. It's all nice and well to say that you want
a purely declarative language but, well, we just don't have the hardware for
that sort of thing, at least not yet anyway (and not in the foreseeable
future, given that nobody is really working on that- not anymore, not after
the Fifth Generation Computer programme tanked).
So compromises have to be made. They have to be made at every step of the way.
Compromises like the selection rule for resolvents (this is what makes the
order of clauses in your program matter, when it doesn't in pure logic).
Compromises like having extra-logical predicates, like write/1 and get/1 (the
latter gets the next byte from the current input- and what is a "byte" in
logical terms? It isn't anything). Compromises like the cut, !/0, and even that
stupid fail/0, which give you some control over the interpreter's program flow-
something completely outside of logic.
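To make the cut concrete, here's a minimal sketch in standard Prolog (max_of/3 is a made-up name for illustration, not from the original comment):

```prolog
% max_of/3: the cut commits to the first clause once X >= Y has
% succeeded, so backtracking never reaches the second clause.
max_of(X, Y, X) :- X >= Y, !.
max_of(_, Y, Y).

% ?- max_of(7, 3, M).
% M = 7, and no spurious M = 3 on backtracking- which is exactly
% the wrong answer the cut is there to prevent.
```

Without the cut, asking for more solutions after M = 7 would also yield M = 3, which is logically nonsense- a small example of trading purity for control.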
But this is how you get a language that is 90% of the way to the purely
declarative, logic programming ideal. You gotta be, whatsitcalled, pragmatic.
I also don't agree that teaching the operational semantics of the language
(the four-port model) is the best way to reveal the beauty of the language.
I've been coding in Prolog since around 2008, and I was very comfortable with
that model, but my understanding is still incomplete. I've recently read
"Foundations of Inductive Logic Programming" that has a big intro to logic and
logic programming with endless proofs about subsumption and the soundness of
resolution and so on, and it's like, I was blind in one eye and could only see
half the world. You gotta have a very solid grasp of both the theory and the
practice to code well in Prolog.
... and there's your problem, right there. The investment you have to make to
really master the language is huge and very few people have the resources for
it. What's the payoff?
For me, the extra effort I put into learning Prolog has actually paid off in career advancement. First, because several potential employers noticed that I listed Prolog in my CV as one of the programming languages I know (a CV is where one is required to brag about what they know, right?) and definitely considered it a factor in their decision to at least interview me, or hire me (I know because they told me). I could of course just list Prolog in my CV without knowing any, but I could have been found out, and I'm a wuss so I'd never do that.
Second, because my interest in Prolog led to an interest in AI, because a majority of Prolog textbooks are also (Good, Old-Fashioned) AI textbooks and reading them to learn Prolog led me to learn about AI, and eventually take a Master's course in, well, actually, machine learning and data science, right around the time when it was exploding in popularity.
And third, because my interest in Prolog combined with my Master's in machine learning led me to apply for a PhD at one of the highest-rated technology research universities in the UK (and in the world- Imperial College), to study Inductive Logic Programming, which is basically machine-learning Prolog programs from Prolog programs, under the supervision of the guy who, literally, wrote the book.
Prolog has taken me places- places I'd never have been without it. Lucrative places, and prestigious places. Any way you look at it, for me at least, it was a worthy investment in time and effort.
For me, a work day in Prolog looked like this: Write your idea down in 1 hour and then spend the remaining 7 hours figuring out where to place the cuts :)
That is, the dynamic/static divide is a distraction. Easy to talk about, but likely nowhere near as important as other factors.
Duck typing and Python's large ecosystem do tend to exacerbate that, though.
Yeah. That was me. A friend of mine tried to clue me in to Prolog about twenty years ago and I didn't "get it".
Recently I revisited it and realized that about 1/3 of my working career was wasted by not using Prolog.
My strong suggestion to any programmers reading this: learn and use Prolog.
If it helps, I remember a senior programmer of mine once saying that out of 35 people in his Master's class, nobody really got it.
It's the way that Prolog is taught at uni. I think, because there is so much material to cover and it's very hard to know where to start, most courses end up teaching whatever the tutors think are the most important bits. Most end up teaching it as any other language, starting from the syntax and going to examples of "how to do such-and-such in Prolog" - like, how to loop over a list etc.
That is how I learned Prolog myself and it took me, I don't know, ten years before I could really get about half of what was going on. I think a much better approach would be to start with the theory of logic programming- spend two or three courses covering propositional and first order logic, then Horn clauses, subsumption and resolution, then when the students have at least a general idea of what logic programming is all about, get started with Prolog- which, after all, is just one possible logic programming language. Cover what it does best, what compromises it makes and what is good and bad about it. Instill some enthusiasm in the language. It's a really brilliant piece of theory and engineering and there's very good historical reasons why it is the way it is, the good and the bad, and the ugly.
>> My strong suggestion to any programmers reading this: learn and use Prolog.
Absolutely. Even more so- learn logic, and logic programming. Its impact on the way you code and the way you think is going to be huge. Hugely positive, that is.
Oh well, the penny has dropped, and better late than never.
I think you're right. I suspect that it would be easier (and likely advantageous) to teach predicate logic first and then introduce Prolog as a kind of straightforward calculator for it, than to teach people imperative programming and then teach them Prolog from that context.
I will say that it's really weird to go from being a Python expert to a Prolog tyro, but you couldn't pay me enough to go back now.
> It's a really brilliant piece of theory and engineering and there's very good historical reasons why it is the way it is, the good and the bad, and the ugly.
Absolutely! Part of the fun has been learning about the rich history and all the research and implementations that have been done.
Programming is so diverse that I’m skeptical it applies to everyone, at least without further justification.
Yes, and functional language interpreter, specifically Joy. I implemented a Joy interpreter in Python and had just implemented type inference and begun working towards a compiler when a paper went by here on HN about using Prolog to write compilers.
That initiated what I'm calling a "conversion experience" where I dug into Prolog and had the aforementioned epiphany.
The crux came when, after reimplementing the Joy interpreter in just four lines of Prolog, I reimplemented the type inferencer and discovered that it was identical to the interpreter. 
Two pages of Prolog code replaced about ten pages of Python code, and the resulting interpreter is more powerful and flexible than the previous implementation.
> Programming is so diverse that I’m skeptical it applies to everyone, at least without further justification.
There are a few non-Prolog-shaped domains, I'll grant you, but they are rare. I coined a new personal rule to the effect of "Use Prolog unless you absolutely can't." YMMV
 https://en.wikipedia.org/wiki/Joy_(programming_language) http://www.kevinalbrecht.com/code/joy-mirror/joy.html
 "Logic Programming and Compiler Writing" Warren 1980
 https://osdn.net/projects/joypy/scm/hg/Joypy/blobs/tip/thun/... This is the published prolog code you asked for, compare to the Python implementation. https://osdn.net/projects/joypy/scm/hg/Joypy/tree/tip/joy/
(1) Can you elaborate on what you mean by the type inferencer being identical to the interpreter? I see the type inferencer in Python in joy/utils/types.py. And I see your interpreter in Prolog -- but how does that do type inference?
Why am I interested in type inference? I wrote a shell interpreter in 19K lines of Python:
http://www.oilshell.org/release/0.6.pre8/metrics.wwz/line-co... (scroll to the end for line count)
I want to compile it to faster code, and that likely involves type inference in the style of PyPy or ShedSkin (which are both used in production).
My understanding is that global type inference for 19K lines of code might be out of reach in terms of speed? As far as I know type inference algorithms are exponential in the worst case. While I'm using a relatively small subset of Python, the type system may still be considered rich (classes with single inheritance, __call__, varargs, perhaps typed dictionaries.)
Do you think Prolog is feasible for this problem? Does it offer you help in terms of debugging and performance problems? Maybe I need some syntax for explicit annotation so I can "help" the backtracking succeed?
I am not exactly sure how it will work. But doing it in Prolog seems somewhat appealing, if I can "prototype" the type system. Though there is the practical problem of simply getting the types into Prolog and back out. I think I would have to generate a prolog program with at least one clause for every variable in my 19K line interpreter?
In any case, thanks for the links to your code. I will be looking at these further.
(2) A recent comment from someone with considerable expertise says that backtracking search just doesn't come up that often:
I seem to recall that Paul didn't just work with Mercury -- he worked on it -- and his Ph.D. work shows that: https://paul.bone.id.au/
I think type inference is a great example of where it DOES come up. But very few people implement type systems (and production type checkers are often in imperative languages like TypeScript, or occasionally OCaml with Hack).
Various types of scheduling may be another place, but I would also count that as a very small proportion of programming work.
Not to mention games, network programming, kernels, etc. I believe you're wildly overstating the applicability of Prolog. A program might have small subproblems that could be expressed in Prolog, but the entire program doesn't fit that paradigm.
Also, I asked in that Reddit thread for "production" uses of Prolog, and I don't really see it. Erlang moved away from it for "engineering reasons", and they were one of the biggest advocates (I read Joe Armstrong's paper about prototyping Erlang in Prolog). Rust is maybe moving toward it, but they're probably using their own implementation and not an off-the-shelf Prolog interpreter.
I think Prolog ironically falls in the same category as Forth (and Joy is not even as practical as Forth). It's incredibly expressive for some things, and it has generated many epiphanies. But if you try to use it in production, you eventually "fall off a cliff" for parts of the program that don't fit the model.
I experimented with both Forth and Joy a number of years ago, and have Manfred von Thun's papers printed out, sitting on my bookshelf. The quadratic formula was one small example of things that are awkward in stack languages, but I think there are others on a larger, architectural scale that are "dealbreakers".
The conclusion I came to about Forth/Joy was that they're worth learning but there's a reason they aren't used much in production. I will probably end up feeling the same way about logic programming (like Paul does after 10 years of deep experience), but I still want to learn it for the reasons everyone says.
I believe that's correct. It's why AI research usually used LISP with Prolog handling specific things it's good at. Often as a DSL, too. A good example which still sells is Allegro CL:
On Prolog itself, there are commercial users outside AI/rules/NLP like this finance company:
Far as efficiencies, most Prologs appear to be based on abstract machines that weren't quite optimized for real hardware. Whereas, Aquarius Prolog that Peter Van Roy posted here once seemed to show there's a lot of efficiency to be gained with optimized-for-metal designs:
So, that's just a few things I have. I never learned Prolog since I think it's too limited. There are quite a few systems that make tasks easy to express, with Prolog being one. I think the ideal system would combine high efficiency in the base language, metaprogramming features (or term-rewriting logics), contracts (esp for automated analysis and testing), stuff like Prolog in DSL form, functional programming like Haskell/Mercury, and so on. All designed to interoperate well with the C ecosystem and/or using assembly for optimized modules. General idea is you can always use the right tool for the job with the base language integrating them smoothly in a way that maximizes productivity during development and performance during deployment.
I want to preface everything below by pointing out that, while I've been programming for about a quarter of a century, I've been programming in Prolog for about a quarter of a year. It's entirely likely that I haven't encountered the downsides of Prolog in an appreciable way yet.
Now that I've admitted my ignorance, I'll proceed to pontificate in the traditions of this fine forum. ;-D
> Can you elaborate on what you mean by the type inferencer being identical to the interpreter? I see the type inferencer in Python in joy/utils/types.py. And I see your interpreter in Prolog -- but how does that do type inference?
Um, I can describe what happened. I wrote the Prolog interpreter then I wrote the type inferencer. Reviewing them afterward, I realized that I could rearrange the inferencer code a little bit to make it identical to the interpreter code. So I deleted it. This is something I discovered, not something I planned.
Basically, Prolog already does what the Python inferencer code does. If you give it logical variables instead of actual stack structures it deduces what they would be, for example:
?- thun([dup, *], StackIn, StackOut).
StackIn = [_9846|_9848],
StackOut = [_9864|_9848],
_9864 in 0..sup .
> I want to compile it to faster code, and that likely involves type inference in the style of PyPy or ShedSkin (which are both used in production).
So you want type inference for the Python code? Look for PySonar (you'll have to dig for it, the creator took it down for some reason but copies and forks are floating around.) It's a pretty sophisticated analyzer that can follow Python's dynamic nature.
But if your main goal is just faster performance for your Python code (as opposed to a type inferencer for Oilshell code, which is what I thought you were talking about at first) I would try Cython.
That said, yes, in my limited experience Prolog would be great for this. (I love Python FWIW but personally I would hate to try to type it even using Prolog.)
> I think I would have to generate a prolog program with at least one clause for every variable in my 19K line interpreter?
Oh no, you'd build a kind of symbol table as you go, I'd expect. I'll try to rustle up some references for you tomorrow. I know I've seen some.
> A recent comment from someone with considerable expertise says that backtracking search just doesn't come up that often:
But it's trivial to implement a meta-interpreter to use other search strategies. I took a look at Mercury (and a ton of other systems, there are dozens of variations and hybrids) but I'm sticking to ISO Prolog as much as possible for now. One of the fascinating things about Prolog is how incredibly simple it is (compared to other programming languages.) I want to take it as far as I can until I'm forced to use something additional by some real-world constraint.
(I've got reddit blocked in my hosts file, it's too late tonight but I'll unblock it and read the thread you mentioned tomorrow.)
You mention type inference and scheduling, ...
> Not to mention games, network programming, kernels, etc. I believe you're wildly overstating the applicability of Prolog. A program might have small subproblems that could be expressed in Prolog, but the entire program doesn't fit that paradigm.
Here's the thing about that: I'm coming to Prolog from a context of implementing a Joy interpreter and compiler; I'm working with Joy because it provides a very elegant language/notation for deriving programs (in the mathy Functional Programming way.) My ultimate goal is a system that generates provably-correct machine code (but that's also a lot simpler than the existing toolchains for that purpose.)
Along comes this paper on using Prolog to write compilers and it's sooooo good. It's also just the beginning. Folks kept working:
"Parsing and Compiling Using Prolog" Jacques Cohen and Timothy J. Hickey
"Provably Correct Code Generation: A Case Study" Qian Wang, Gopal Gupta
"From Programs to Object Code and back again using Logic Programming: Compilation and Decompilation" Jonathan Peter Bowen
"Automatic Derivation of Code Generators from Machine Descriptions" R. G. G. Cattell
So now I have Prolog, with an embedded Joy interpreter, and in a minute I'll have the compiler. My strong hunch is that this combo will be a big win.
I think the pattern will be inverted: most of a given program can be specified in Prolog(-ish) with Joy to handle nitty-gritty stuff that just doesn't fit well in the Logical Paradigm. For things that really don't fit Prolog nor Joy, well, libraries and extensions. Prolog can load shared libs.
Without going into detail, I should mention that Alan Kay's VPRI "Desktop to the metal in 20,000 LOC" STEPS research project makes extensive use of OMeta parsers, and Prolog DCGs are similar but more powerful.
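For a taste of what a DCG looks like, here's a minimal sketch (a made-up toy grammar, nothing specific to OMeta or STEPS):

```prolog
% A grammar for balanced parentheses, written as a DCG. Each rule
% compiles to an ordinary predicate threading a difference list,
% which is what makes DCGs both parsers and generators.
parens --> [].
parens --> ['('], parens, [')'], parens.

% ?- phrase(parens, ['(','(',')',')']).   % succeeds
% ?- phrase(parens, ['(',')',')']).       % fails
```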
Also, "Compiling to categories" Conal Elliott http://conal.net/papers/compiling-to-categories/ Prolog+Joy makes this really easy...
> The quadratic formula was one small example of things that are awkward in stack languages
You're talking about "The annoying quadratic formula" article? What do you think of this derivation?
It gave me at least a start of an understanding of Prolog.
I watched the Strange Loop talk linked in the README there too. I think some of the downsides he mentioned are right on -- for example, the fact that Prolog will sometimes just spit "false" at you and is hard to debug. And he even fired up the graphical debugger in the talk.
I think this will be a problem for a "production" type inference engine (although technically speaking my project is just a tool for Oil developers, so the bar is lower.)
It's similar to the problem with generated parsers. In theory, it's nice to generate parsers from grammars. In practice, mature parsers are all written by hand, mainly for good error messages. I see the importance of this in the OSH parser, which is large and comprehensive.
Also, I don't think Prolog's data structures are convenient for representing "real" programming languages. As far as I can tell, you basically get what you do in Lisp, and that isn't enough IMO. Algebraic data types would be nicer, and even Java/TypeScript-like objects are better IMO. Oil uses Zephyr ASDL for this purpose.
Still, I would like my type system to be closer to mathematics than ad hoc procedural code. Learning some Prolog (and logic programming in general) seems like a good way to do that.
Check out the README and the code, and let me know if you have any comments on the issues I raised. We can continue this discussion in e-mail if you like at email@example.com.
Well, if we omit the dumb mistakes that every programmer makes, and every mistake that dumb programmers make, is the situation with Prolog any worse than, say, trying to grok some inscrutable traceback?
I've seen about a five-to-one "compression" of code, meaning that a line of Prolog seems to be about equivalent to five lines of Python, and sometimes much more. From that alone I can expect a five-fold reduction in bugs, eh? But it also seems to me to be more difficult to introduce bugs in Prolog, and somewhat easier to debug them when they do arise.
Prolog also admits of something called "declarative debugging", cf. https://www.metalevel.at/prolog/debugging
My limited experience has so far been really good. But again, I've only been programming in Prolog for a few months. None of my programs have grown large yet, so perhaps in a year or two I might sing a different tune, eh?
> Also, I don't think Prolog's data structures are convenient for representing "real" programming languages. As far as I can tell, you basically get what you do in Lisp, and that isn't enough IMO. Algebraic data types would be nicer, and even Java/TypeScript-like objects are better IMO. Oil uses Zephyr ASDL for this purpose.
At first this might seem like a handicap, but really it's not, everything you need is there but it's not always easy to see at first. For example, Prolog subsumes the Relational Model, so you already have the functionality of an RDBMS.
Something like ADTs are easy to emulate with "tag" atoms. I don't even know what the technique would be called in Prolog, it's just something you do.
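As a hedged sketch of that "tag atoms" idiom (the names here are invented for illustration): what an ML-style sum type does with constructors, Prolog does with compound-term functors that clause heads dispatch on.

```prolog
% Shape = circle(R) | rect(W, H), encoded as tagged compound terms.
% Clause heads pattern-match on the tag, like a case expression.
area(circle(R), A)  :- A is pi * R * R.
area(rect(W, H), A) :- A is W * H.

% ?- area(rect(3, 4), A).   % A = 12
```

You don't get exhaustiveness checking the way a typed language would give it to you, but the dispatch itself falls out of unification for free.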
Some Prologs have grown dict-like structures: http://www.swi-prolog.org/pldoc/man?section=bidicts
But in many cases you wouldn't really need them because your facts and rules are already effectively defining them. The data and relations that you want to describe with "real" PLs are better described in the abstractions that Prolog provides, in most cases.
Zephyr ASDL looks interesting but it's also exactly the sort of thing that I would pass over in favor of just using Prolog.
I wrote a parser for the Oberon language in a day and it's not much larger than the grammar rules. I also wrote an assembler for the Project Oberon RISC CPU that's only two pages long, and it's also a disassembler. I started to write the innards of the compiler (connecting the AST to the ASM) and it was dreamy, but I realized I didn't need most of it for Joy so I set it aside. (Joy doesn't use variables so it doesn't require a symbol table. It doesn't use subroutines so it doesn't require stack frame allocation. And so on.)
I freely admit that there are non-Prolog-shaped problems in the world, but I have to insist that they are fewer than they might seem. :-)
> Still, I would like my type system to be closer to mathematics than ad hoc procedural code. Learning some Prolog (and logic programming in general) seems like a good way to do that.
Check out "Automatic Type Inference via Partial Evaluation" by Aaron Tomb & Cormac Flanagan https://galois.com/wp-content/uploads/2014/07/pub_AT_Automat...
From the abstract:
> Type checking and type inference are fundamentally similar problems. However, the algorithms for performing the
two operations, on the same type system, often differ significantly. The type checker is typically a straightforward
encoding of the original type rules. For many systems, type
inference is performed using a two-phase, constraint-based algorithm.
> We present an approach that, given the original type rules
written as clauses in a logic programming language, automatically generates an efficient, two-phase, constraint-based
type inference algorithm. Our approach works by partially
evaluating the type checking rules with respect to the target program to yield a set of constraints suitable for input
to an external constraint solver. This approach avoids the
need to manually develop and verify a separate type inference algorithm, and is ideal for experimentation with and
rapid prototyping of novel type systems.
> We can continue this discussion in e-mail if you like at ...
Cheers! I'm kind of a recluse so please don't take it personally if I don't, but let me recommend comp.lang.prolog and also the SWI-Prolog forum http://groups.google.com/group/swi-prolog
As in : it doesn't work, no one knows why, no one can fix it and no one has a plan that doesn't involve starting from the beginning and doing it differently.
consider two programs
Good prolog :
% assuming facts like: on(a, b). on(b, c).
above(X,Y) :- on(X,Y).
above(X,Y) :- on(X,Z), above(Z,Y).
?- above(a, c).
You'll get TRUE (yay), a is above c! GOOOOOOOD prolog.
but now :
above(X,Y) :- above(X,Z), on(Z,Y).
ERROR: Out of local stack
Exception: (1,675,220) above(a, _4808) ?
ERROR: Out of local stack
Exception: (1,675,220) above(a, _4808) ?
Prolog gets stuck expanding the first search until it can't carry on. One line is out of order in the program and you are dead, and the problem is that you will not find that line until that question is asked, and when you find it (in a real example) it really is the devil's own job to work out where the expansion is coming from and how to fix it.
But there you go - search-based logic resolution, brought to you by 1970s computer architecture (small main store) and solvers. Note, not only is "it doesn't bloody work" an issue, but also (probably much worse) "it keeps hanging for five minutes". This kind of problem is inconsistent, hard to replicate and very, very hard to debug. It's caused by an occasional big search in the knowledge base- a big search that doesn't quite end up exploding the system- and it's often accompanied by shifty developer behaviour involving parameters and special cases. In a real project it's a nightmare.
Now : NOOOOOOO! Instead use Problog or something similar and reason using Answer Sets. For your first steps an answer set system looks very like Prolog, but does not suffer from the issues above (as far as I can see). There has not been (so far) a big charge to use this in prod (with Prolog and KBSs there was, and it failed, because of the above), because everyone is playing with 40-layer, 200-billion-parameter neural networks and wondering why proving they can beat a tiny part of one benchmark, or mostly steer a car without killing many people, is not translating into application success. And this is a good thing, because I don't think the tech is quite there yet and the community needs five years and about half a dozen big "let's make this practical" applied research projects (btw. keen on partners?) to actually make this manageable for people with deadlines and investors.
But - it's going to help you more than Prolog.
My experience is the opposite. Because Prolog is an interpreted language, and
because it tends to not need the global dependencies other languages have
(e.g. no variables declared outside a predicate's closure etc) it's very easy
and convenient to program in a "bottom-up" approach.
Which means, in plain English, that every time you add a new predicate to your
program, you can test it immediately on the command line and see whether it
"goes infinite" or behaves in other ways strangely. It's kind of like test-
driven programming; let's say, REPL-driven programming.
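As a hypothetical sketch of that workflow (predicate name invented for the example):

```prolog
% last_of/2: written bottom-up, then probed at the listener
% immediately, before anything is built on top of it.
last_of([X], X).
last_of([_|T], X) :- last_of(T, X).

% ?- last_of([a, b, c], X).   % X = c
% ?- last_of([], X).          % false -- base case exercised at once
```

A predicate that "goes infinite" tends to reveal itself within seconds of being written, rather than weeks later inside a larger program.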
And let's not forget that you can get in an infinite-loop situation with any
language. I have fond memories of working at a C# shop, where one line of code
in a shared lib brought down the entire stack of web apps deployed by the
company, because someone hadn't tested a terminating condition in an iterative
loop. The post-mortem went a bit strange and it seems (I wasn't present) that
upper management, who didn't know much about programming, advised that
iterative constructs like for-loops etc should be henceforth avoided like the
plague and recursion used instead. Obviously, nobody even tried to do that.
In another incident, in the same shop, one of the designers caused the
in-house, django-like, templating language to go into an infinite recursion.
Apparently, the django-like language (called "mango") was Turing complete.
Playing with Turing machines is dangerous. They tend to get tied up in loops.
You should know that, understand why, and code as to avoid it in any
language, not just Prolog.
Many languages do have REPLs which JIT before execution.
The point is that you can do bottom-up programming at the command line (a.k.a. the "listener" in Prolog).
> The big demonstration is that it's really rarely used in industry.
This line alone tells me I'm going to have infinite recursion. That would be a problem in any language, of course, and isn't particularly hard to spot here.
Also, I want to note, although you didn't say otherwise, that the recursion may be hidden behind layers of indirection.
The GP writes as if proper testing and code review were unheard of, so the issue would be the same in any language, in principle (save for testing facilities, which weren't the point, however).
It seems like a problem that ought to be solvable with static analysis or a fuzzer -- solvable in the sense that the error is detected before the question is asked. I imagine if Prolog had wider adoption, somebody would have taken a crack at it.
At its core, Prolog combines term unification with a depth-first search strategy. Implement those two primitives and you get a Prolog. But there are problems where depth-first search is not the best approach (e.g. when the search space is potentially infinite), and being able to code a different strategy is important.
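The classic three-clause "vanilla" meta-interpreter makes this concrete. It reproduces Prolog's own depth-first, left-to-right execution, and the third clause is the point where a different strategy (breadth-first, iterative deepening, ...) would plug in. A sketch, with the caveat that clause/2 only inspects interpreted or dynamic predicates on many systems:

```prolog
% solve/1: interpret a goal using Prolog's own clause database.
solve(true).                                  % empty body: done
solve((A, B)) :- solve(A), solve(B).          % conjunction, left to right
solve(Goal)   :- clause(Goal, Body),          % pick a matching clause...
                 solve(Body).                 % ...and run its body
```

Replacing the last clause's use of chronological backtracking with, say, an explicit queue of alternatives is all it takes to change the search strategy.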
Modern languages support limited unification in the form of destructuring assignment. If they could make it bidirectional without also completely confusing everyone used to more traditional programming languages, Prolog wouldn't have many advantages left.
People reading that might think you can never prove termination of programs automatically. Sounds overstated given the results happening in CompSci. There are a few tools that can automatically prove termination of a given program. AProVE was an older example that used rewriting like a lot of them:
Terminator is one for C programs:
There's also a site where many tool builders competed in lots of categories:
Lots of research happens under the "temporal logic" banner where they run it through model checkers, provers, and so on. Anyway, it seems proving termination of useful programs, protocols, or just critical portions is possible in general and doable with current tools for some subset.
1. Design the main computation or loop to operate on a fixed chunk of data in a way that can be verified in isolation.
2. Design a component that either streams in new data or says there is none. Verify it in isolation.
3. Compose the two. The result will take X proven timing multiplied by Y amount of streamed data for as long as it runs or do nothing if stream is empty.
Shouldn't this handle what you describe in the general case or does that simple strategy fall apart even on common stuff with event loops?
A factor to consider: Prolog is old. It's very unlikely you'll encounter some problem or issue that has not already been addressed, often by a team of researchers.
For example, when I was exploring Prolog I tried a simple transformation on syllogisms.
I found some code online that had them in this form:
syll(all(M,P), all(S,M), all(S,P)).
% This is barbara in the 1st figure.
syll(no(M,P), all(S,M), no(S,P)).
% This is celarent in the 1st figure.
term_expansion(syll(A, B, C), (C :- B, A)).
all(A, C) :- all(A, B), all(B, C).
no(A, B) :- all(A, C), no(B, C).
?- all(A, B).
ERROR: Out of local stack
?- call_with_depth_limit(all(A, B), 10, R).
A = dogs,
B = mammals,
R = 11 ;
A = dogs,
B = canines,
R = 11 ;
A = canines,
B = mammals,
R = 1.
From the POV of security, the trick I want to explore is writing Prolog code to generate provably-correct machine code.
The ideal Prolog model is basically (timeless) predicate logic, eh? I think of Prolog as a useful but not foolproof mechanical calculator for predicate logic, and I'm happy that it works as well as it does with such a simple implementation (chronological backtracking with unification.)
The glass is half-full, rejoice! :-)
Constraint logic programming, e.g. CLP(FD), makes many kinds of problems easy to solve. I wrote what I call the "Boring Sudoku Solver". It's so simple it's dull. It'll solve, validate, and generate Sudoku puzzles.
Interestingly, both of these are considered somewhat newfangled in the Prolog community (despite being ancient by, say, JS standards.)
 https://en.wikipedia.org/wiki/Constraint_logic_programming http://pathwayslms.com/swipltuts/clpfd/clpfd.html https://www.metalevel.at/prolog/clpfd http://www.swi-prolog.org/man/clpfd.html
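As a minimal illustration of why one piece of CLP(FD) code can double as solver, validator, and generator (this uses SWI-Prolog's library(clpfd); row/1 is a made-up toy, not the actual Sudoku solver):

```prolog
:- use_module(library(clpfd)).

% One 4-cell "row": each cell in 1..4, all different. The same
% constraints complete a partial row, check a full one, or
% enumerate all valid rows, depending on what's already bound.
row(Row) :-
    length(Row, 4),
    Row ins 1..4,
    all_distinct(Row).

% ?- row([1, X, 3, Y]), label([X, Y]).   % first answer: X = 2, Y = 4
% ?- row([1, 2, 3, 4]).                  % succeeds (validation)
```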