I concluded it didn't deliver on its promise.
The promise was that, as in the article, you 'Say what you want, not how you want it done.'
In practice, once you start writing non-trivial programs, you run into situations where they take ages to run, as Prolog does its search in the background.
So, you have to start using all these language features to control and optimize the search, such as the cut: https://en.wikipedia.org/wiki/Cut_(logic_programming)
And you end up spending a lot of time reasoning about exactly the "how" that the promise said you didn't have to care about.
I concluded that, as a result of performance issues, the core Prolog abstraction 'leaks' too frequently for it to be worthwhile. (https://en.wikipedia.org/wiki/Leaky_abstraction)
It's a great promise, if a smarter language or implementation could achieve it, but overall I concluded that if I was going to have to spend all this time reasoning about the how anyway, I might as well deal with it up-front rather than after-the-fact.
I'd love to hear a contrasting opinion (I got pretty good at it, but only over months, not years, of use). But I suspect this is the reason why Prolog didn't make it. It looks good at the start, but it doesn't deliver.
Full-blown end-to-end Prolog programs in production are vanishingly rare, and for good reason. But much more useful are the Prolog-inspired chunks of code dealing with a separable part of a larger problem. Logic-based programming is an appropriate tool for some problems, and it can be applied even in the context of procedural languages (of course, it's easier in languages that support DSLs).
As an example, Clojure's core.logic seems to be pretty popular, and that's basically just a subset of Prolog. Somewhat counter-intuitively, Prolog-inspired DSLs tend to be more straightforward because they're missing some of the advanced features that Prolog needs to be usable. The DSL can rely on the host language to do some of the dirty work for it.
Aye, very good advice. Also, keep in mind Prolog is essentially Turing-complete pattern-matching: like regular expressions, but with the ability to describe any program you can hope to compute (and, er, some you can't; see infinite recursion).
That is the promise of declarative programming, in general. Prolog is a logic programming language that is also declarative. Its primary "promise", insofar as it makes any promises, is to allow the use of first-order logic as a programming language. In that, it delivers in spades, more so than any other language (and even other logic programming languages are largely based on Horn clauses).
The reason Prolog is held up as an example of declarative programming is that it's one of the best realisations of that paradigm, or in any case one of the few such languages one can do actual programming in (as opposed to, say, XML or HTML).
Prolog's name however stands for "PROgrammation en LOgique", not "PROgrammation DEclarative".
In that sense it's pretty unfair to blame Prolog for not being a perfect declarative language, or for not keeping any other promises it never made.
The only problem is, the domain in which this is useful is fairly small, and requires an explicitly defined knowledge base, whereas things like machine learning are very flexible and cover a broad variety of domains pretty well. One area I think it holds promise for is in games - imagine characters being able to make deductions about the world around them, and react appropriately - beyond things like behavior trees (which also do not scale once you have many complex rules interacting).
So...in my opinion it is a little bit like SQL. You would not use SQL to write an application (I hope!).
You might want to take a look at Datalog :
Datalog is a declarative logic programming language that syntactically is a subset of Prolog. It is often used as a query language for deductive databases. In recent years, Datalog has found new application in data integration, information extraction, networking, program analysis, security, and cloud computing.
An example of this in real-world use would be in Rich Hickey's Datomic :
Datomic is a distributed database and implementation of Datalog on Clojure. It has ACID transactions, joins, a logical query language—Datalog.
I'm thinking there ought to be something with usefulness comparable to SQLite or at the very least Berkeley DB? But I've yet to find it.
Not a bad hunch, but SQL is in a way only half of Prolog.
SQL relations (tables) can be seen as first-order logic predicates that are always true.
Prolog has those (they're called unit clauses, or "facts") but also has relations that are actually implications (called definite clauses, or "rules"). "Rules", being conditionally true (or not) allow programming logic to be described.  
The funny thing is that in Prolog both facts and rules are the same kind of thing (a special kind of first-order predicate called a Horn clause). In, er, fact, so are the queries you write at the command line (negative clauses which are always false, so that you can prove them true by refutation, binding variables in the process; this is how you "retrieve data" in Prolog, where your program is your data).
In this sense Prolog is very different to SQL, which has a clunky syntax on top of a neat abstraction. In Prolog you program with the abstraction itself, so yes, totally: in Prolog you use the database to write an application. Except it's not a SQL database.
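To make the fact/rule/query distinction concrete, here is a minimal sketch (the predicate names are illustrative):

```prolog
% Facts: unit clauses, unconditionally true -- the "SQL table" half.
parent(tom, bob).
parent(bob, ann).

% Rule: a definite clause, conditionally true -- the half SQL lacks.
grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

% Query (a negative clause, proved by refutation, binding Who):
% ?- grandparent(tom, Who).
% Who = ann.
```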
I think that it depends on the domain. Part of my PhD project was making a natural language generator for a Dutch grammar/lexicon. The grammar was developed for a high-coverage parser. Due to Prolog's declarative nature, the grammar, lexicon, and productive lexicon could be used with very few changes. (E.g. in productive lexicon, for parsing one would derive a root and lexicosyntactic information. In generation, you start with a root and underspecified lexicosyntactic information and derive a word with its full lexicosyntactic information.)
One could argue that Prolog was primarily used for data (grammar rules, lexicon). And it is true that people have developed systems that target unification grammar specifically. But in this domain, Prolog gives you a good local optimum: feature structures are trivially converted to Prolog terms, feature structure unification becomes Prolog term unification, Prolog's metaprogramming facilities make it easy to compile/rewrite grammar rules (e.g. in order to exploit first-argument indexing in particular scenarios).
That said, I didn't and wouldn't use Prolog outside that domain and haven't written Prolog since. Also, I think Prolog has serious deficiencies, e.g. it can be quite tricky to write good steadfast predicates.
Weeell, more to the point: in Prolog your data is your program, so anything you do is data manipulation (or program execution, depending on how you choose to think about it).
It's just fundamentally much harder to get anywhere than with procedural languages.
It's harder to learn than procedural languages. Once you know what you're doing it's much easier to get anywhere with it.
Not anymore. It is, after all, an in-memory NoSQL database. Even "slow"-ish Prologs are pretty fast these days.
If your program is running slowly you need to take care with the way you've written it, but that's true for any language. It's more the case with Prolog and it's harder to optimise because it's harder to get your head around it, but beyond that, it's not "slow" in any sense of the word, particularly when you take into account what it is that it does.
I used prolog professionally quite a bit back in the early 2000s. IBM had a product, Tivoli Enterprise Console, that was used to filter, aggregate, and correlate events from their monitoring toolset at the time. All of the rules for event routing and filtering were written in an extended version of prolog.
The system was quite powerful at the time, and I found rule writing to be really intuitive once I wrapped my head around the concepts.
IBM eventually sunsetted TEC in favor of a similar product they acquired when they bought Netcool, but I'll bet there are still a few organizations out there still using it.
I generally found that while getting Prolog to do cool things was fairly straightforward, getting Prolog to do cool things quickly involved knowing precisely where to put cut operators, and that was basically black magic. But it's been a while.
BTW, our Prolog lecturer wrote a theorem prover in it. That's pretty straightforward; we all wrote theorem provers in it as an exercise; but his theorem prover had a polished MacOS GUI, and that was also written in Prolog. He was writing low-level MacOS stuff in raw Prolog. To this day I still have no idea how.
To add to tom_mellior's much upvotable reply, I find that the cut is much less painful if you keep your predicates small and mind separation of concerns, as you should anyway.
A good recommendation about when to use the cut operator is "as soon as undesired nondeterminism happens". With short and to-the-point predicates this is much easier to do.
Finally, the Prolog community has terms for two types of cut: the "green cut" and the "red cut". A green cut simply prunes non-productive backtracking into subsequent clauses that would only fail, so it doesn't change the meaning of the program. A red cut is one that can significantly change the behaviour of the program, by stopping subsequent clauses from being tried even when they could succeed.
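A minimal sketch of the difference, using the textbook max/3 example:

```prolog
% Green cut: the clauses are mutually exclusive anyway; the cut only
% prunes backtracking that would fail, without changing the meaning.
max_green(X, Y, X) :- X >= Y, !.
max_green(X, Y, Y) :- X < Y.

% Red cut: the second clause is only correct *because* the cut in the
% first stops it from being tried; delete the cut and
% max_red(3, 1, 1) wrongly succeeds on backtracking.
max_red(X, Y, X) :- X >= Y, !.
max_red(_, Y, Y).
```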
 Frex, see this very nice resource: "Coding Guidelines for Prolog", Covington, Bagnara, O'Keefe, Wielemaker and Price. Online here: https://arxiv.org/pdf/0911.2899.pdf
I realize you said this was a joke, but still... A compiler in Prolog might look like this:
compile(Input, Output) :-
    (   parse(Input, AST)           % if parsing succeeds...
    ->  process(AST, Output)        % ...then process the AST...
    ;   writeln('syntax error')     % ...otherwise report an error
    ).
> I generally found that while getting Prolog to do cool things was fairly straightforward, getting Prolog to do cool things quickly involved knowing precisely where to put cut operators, and that was basically black magic.
The cut is indeed complex, and for historical reasons it's taught too early and emphasized too much by almost all Prolog classes and books. For example, most resources would tell you to write the example above as:
compile(Input, Output) :-
    parse(Input, AST),
    !,    % commit to this choice if parsing succeeded
    process(AST, Output).
compile(_Input, _Output) :-
    % we get here by funky implicit control flow if parsing failed
    writeln('syntax error').
As for writing "low-level" code like GUIs, you just use your Prolog system's foreign function interface....
I am, actually, right now struggling with priority-and-constraint-based register allocation and instruction selection for a compiler backend. It is so the right kind of problem for Prolog.
';' has been in the language since the beginning, and I'm pretty sure '->' was too, but I can't say for sure.
Edit: mention SWI Prolog.
http://ieeexplore.ieee.org/document/6177727/, if you've got an IEEE subscription
Of course intros to other languages may gloss over I/O as well, but with mainstream imperative languages (that aren't oriented to defining and querying data), it's more well known or assumed that the standard I/O patterns apply.
You're right about that and I don't think it's a minor thing. One reason is that most tutorials have a hard enough time going through everything you need to know before you can start parsing json and writing out to file, without getting stuck in infinite recursion etc. so the nitty-gritty of day-to-day programming work is often left as an exercise for the reader.
On the other hand, I have the feeling that many people check out Prolog to see how it's different then go looking for the way to do the usual things in it. This may be the wrong approach. Prolog has some advanced features and those are "the whole point" about the language. If all you'll ever need to do is process some csv, there's probably no point in going through all the hard work to learn Prolog- you can do it in another language rather more easily.
That said, the Swi-Prolog homepage has a link to a good tutorial on creating web applications with it that I recommend to anyone interested in that sort of thing:
It's under the "Tutorials" header. Unfortunately my phone's ISP is blocking it so I can't link directly to it. Harumph.
As for Prolog, I get the sense that it is implied that it is an "information/data language", and it is taught a little like how SQL is taught; only, you're given just the SQL parser, and building the clustered relational backend is left as an exercise to the reader, as well as bulk import/export and backup/restore systems.
However, once you start bolting on side effects, like I/O and asserting/retracting facts in the middle of queries, all declarativeness flies out the window and you're left trying to explicitly juggle the implicit branching and control of the program counter to try to keep the system sane, bringing in more and more of the complexities of imperative programming. Optimization also basically requires wresting control of the program counter back from the abstract system into explicit programmer control.
It certainly works as part of practical real-world systems, but I wouldn't necessarily want to write a large, behavior-heavy application purely in Prolog. For large in-RAM data, but light on behavior stuff, fine. There are certain data analyses that are much faster to develop in Prolog than imperatively, and going multi-language can save a lot of programmer time compared to writing everything in $TRADITIONAL_LANGUAGE purely out of inertia.
I'm unsure what this alludes to. Are you talking about minding tail-call optimisation? That's not a feature of Prolog only.
>> going multi-language can save a lot of programmer time compared to writing everything in $TRADITIONAL_LANGUAGE purely out of inertia
A better idea is to chuck out most of your stack and write most of your application in Prolog itself- no xml, no having to worry about Object-Relational Impedance Mismatch, and so on. You don't even need regular expressions, because writing a compiler in prolog with Definite Clause Grammars is a piece of cake (something that can't be said for any other language under the sun).
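As a taste of what DCGs look like, here is a minimal sketch of a grammar for single-digit additions (the nonterminal names are my own, and I'm assuming SWI-Prolog conventions such as code_type/2 and backquoted code lists):

```prolog
% The same rules can parse a code list into an AST, or generate
% strings from an AST, depending on which arguments are bound.
expr(add(X, Y)) --> term(X), "+", expr(Y).
expr(X)         --> term(X).
term(num(D))    --> digit(D).
digit(D)        --> [C], { code_type(C, digit(D)) }.

% ?- phrase(expr(AST), `1+2`).
% AST = add(num(1), num(2)) ...
```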
For instance, see "Can I replace a LAMP stack with Prolog?", here:
Many of the optimizations I've seen are ensuring that no branches are explored that the programmer knows are irrelevant once certain unifications have succeeded, and to control the ordering of branch exploration to try to find faster answers first. Indexing is also manual, and while I haven't looked in a few years, I haven't seen any JIT-style heuristic feedback that lets the system optimize itself for the runtime conditions of Prolog programs.
In my definitions, which some certainly argue against, declarative means you let the CPU do "whatever it takes" in the background, while imperative has you directly managing where the program counter goes, in what order, including explicit thread management (which Prologs that I've seen make you do as well).
OK, you're talking about the cut here. I've made some more comments on it above, but basically I don't really see what the problem is with it. It strikes me as more of a theoretical point to have to worry about.
Prolog takes a practical approach to both logic programming and the whole declarativeness thing. That is to say, if you can't have efficiency, you sacrifice some purity so that you can have a language that lets you do work. The cut, along with other control-flow predicates ((->)/2, (;)/2 etc.) and predicates with side effects (write/1, get0/1 etc.), is normally discussed under "extra-logical facilities" or some such section of textbooks.
We all understand that first-order logic inference is intractable, but we want to run it as a programming language, so concessions must be made to material reality. Same goes for every other language: computing efficiently is one half of programming computers (untangling the mess you make is the other half).
This is a very old line of criticism and I don't really see the point of it anymore- not after having learned how to use the cut without hurting myself. So maybe it's just a problem in principle, because in practice it's not such a big deal.
>> Indexing is also manual, and while I haven't looked in a few years, I haven't seen any JIT-style heuristic feedback that lets the system optimize itself for the runtime conditions of Prolog programs.
Swi-Prolog has JIT clause indexing:
The first argument of compound terms is indexed, as long as it's atomic, so you do need to understand indexing to make best use of it. I mentioned a recent case where I made a mess and then untangled it once I realised how I was using indexing wrong.
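A minimal sketch of the difference argument order makes (assuming a first-argument-indexing implementation like SWI-Prolog; predicate names are illustrative):

```prolog
% With the key first, good(k42, V) hashes straight to the matching
% clause via first-argument indexing:
good(k1,  value1).
good(k42, value42).

% With the key second, the unbound first argument matches every
% clause head, so lookup degenerates to a linear scan that also
% leaves choice points behind:
bad(value1,  k1).
bad(value42, k42).
```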
So, you need to know some programming, right? I don't see that as a bad thing.
>> explicit thread management (which Prologs that I've seen make you do as well).
For multithreaded applications, that's true. I don't know how much of a problem that is in practice, because I've never really had to use those facilities, but at least you've got immutability to protect you from the usual pain.
These concessions are a bad thing if the goal is to work in a new model of programming distanced from the complexities and bugs of imperative programming. These concessions bring back the same class of bugs and problems, exemplified by "oh, you just have to learn how not to have your program self-destruct by using this low-level operation incorrectly" in a declarative language.
I'd have no problem with replacing green cuts and implicit ordering with separate non-invasive hint declarations, instead of exposing low-level backtracking-munging operations which can be used in odd and destructive ways (and we in fact did so in our internal Prolog-inspired languages). The presence of these types of operations and constraints also means the compiler often can't reorder clauses even when it could (statically or dynamically) determine that one ordering is better than another.
Taking a mostly declarative language and throwing in cut, inline io/assertions/retractions when backtracking can occur, and demanding the execution follow the lexical order of all clauses is basically like throwing raw C pointers into Java: It seems to go against the language's design intent. It's a hack to get things going back when computers were too small & slow to have smarter compilers, and these hacks bring with them otherwise completely unnecessary problems. I don't think they should be defended, but viewed as warts that hopefully the language can design past in the future.
At the end of it: 1) you didn't learn to read or write text to the console; 2) you are unable to make any useful program, not even a hello world.
This kind of non-practical training material is a recurring issue with functional languages.
It's a recurring theme when teaching languages to do it through something the author (maybe implicitly) considers a good fit, something that emphasises the beauty of that particular language.
Might be good while you are learning your first language, maybe?
This explains the I/O- and side-effect-heavy imperative tutorials, the structure-rich OO tutorials, the 'algorithmic' FP ones, etc. Turned upside down, this quite clearly tells you which language is strong in a particular area, and complicated/awkward in another.
Myself, I prefer to start with the ugly parts, they contain much more information about the tradeoffs and focus of any particular language.
I fully support the choice of SWI-Prolog by the author of the post. SWI also has good interfaces to other programming languages, I've only used the Java-bridge though but I think Prolog mostly shines if you mix it with another language.
I think a good next step would be researching definite clause grammars and constraint programming, as those are (imo) the best use cases for Prolog. If you're mostly interested in constraint solvers I recommend ECLiPSe (not the IDE, very unfortunate naming): http://www.eclipseclp.org/
I think Prolog is a great language to train your brain, if that makes any sense. It's quite nice to write a Sudoku solver and the like in it, even though you can get better performance in other languages. Whenever I write Prolog it takes a bit for my brain to readjust, which I feel is a pretty good sign, as it means you're working in a different paradigm.
In fact I'd recommend that to anyone, regardless of interest in AI- it's a great programming textbook all around and accessible to anyone who knows at least one of the three languages (coughjavacough).
If anybody wants a challenging and enlightening programming exercise, I recommend writing your own prolog[esque] interpreter.
Surprisingly it still has not been used much in games (well, actually not that surprising given how time-consuming and risky it is). I made a stab at  but ended up taking more time than I had budgeted for. One thing I suspect may be using it is the game 1935, although they are vague on the details.
The other day I was playing Sid Meier's Civilization and realized that the rule engine for games like that would be a very good fit for a Prolog-like logic programming language (using an embedded interpreter). Prolog would allow adding new rules without touching the old ones, e.g. add a new rule for, say, defensive bonus would be just a single new line in the code base.
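A sketch of what such a rule base might look like (all predicate names here are hypothetical):

```prolog
% Existing combat rules...
defense_bonus(Unit, 3) :- on_terrain(Unit, hills).
defense_bonus(Unit, 5) :- in_city(Unit, City), has_walls(City).

% Adding a new bonus really is a single new clause, without
% touching the rules above:
defense_bonus(Unit, 2) :- fortified(Unit).

% Total bonus by aggregating every applicable rule (aggregate_all/3
% as in SWI-Prolog):
total_defense(Unit, Total) :-
    aggregate_all(sum(B), defense_bonus(Unit, B), Total).
```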
A nice book I've read recently is Ivan Bratko's https://ailab.si/ivan/novice.php
Try to grab a second-hand copy, or a library that has it on the shelves (probably gathering dust). The basics are covered, then graph search, state space, tricks like difference lists (a pretty unforeseen use of Prolog semantics tbh), a bit of NLP.
If the shortcut is doable at all. The HOL-to-FOL gave me some excitement about this. It just wasn't done entirely in FOL itself.
Anyway, here's the work the result will build on whose description is useful to wider audience:
I'm not certain it's useful in the real world, but I did write a solitaire solver in it that could work with any number of decks and consequent stacks.
Admittedly, Curry is an academic language, not 100% production ready.
Do a ./configure --help, and try to figure out what the best "grade"(s) to build is/are.
I've been meaning to try Mercury for years, but after this, there's no chance I'm going to try writing anything in it. What other crufty unusable insanity have they built in there?
This blog post describes some of them: https://adventuresinmercury.blogspot.co.nz/2011/11/making-gr...
(And Dr. Huch, another person from Hanus’ team, who held that lecture this year, also didn’t mention Curry in it.)
> I am curious, is there a potential date for release of version 1.0.0
> of the Curry programming language in the near future, ie to be
> considered production-ready and feature-complete?
Although we have been working on Curry for years, I am careful about releasing a 1.x version. In the last years, the design has stabilized a lot, in particular with the recent switch to 0.9.
However, since Curry intends to be an extension of Haskell, one big thing is missing: type classes. If type classes are added, I think it could be called 1.0. However, for this purpose, we need serious implementations of type classes. We made one approach some time ago but it needs a lot of effort to get it ready with all the tools that exist so far. Now, we are working on another approach to add type classes that might be finished in the next year.
Since the addition of type classes breaks (the type signatures of) existing code, it is not a simple extension.
Sorry that I can't be more specific. At least, we are working on it, but our capacity is limited :-(
 - http://ontop.inf.unibz.it/
 - https://en.m.wikipedia.org/wiki/Answer_set_programming
Thinking in "facts" / binary relations is itself a refreshing approach.
I asked twice on StackOverflow  and had the question declined each time.
This is what the replies in  suggested. Prolog isn't exactly intended for preferring some solutions over others, but it has predictable solution generation that can be used to prescribe priority.
That's a lot of mental work, but the code is succinct. If you want to get by without flow diagrams, your best bet is to generate and optimize a cost function, and that can be done in any language.
But prior to that my plan was to try and write such a system in Prolog (or some kind of embedded logic, like miniKanren).
Note that while Prolog can find all solutions to a combinatorial problem, it can't (obviously) protect you from the complexity of searching all solutions...
I went to a talk about microKanren, another logic programming language, which is embeddable in host languages. I think it shows a lot of promise for implementing decision engines. The biggest downside is that it can be tough to reason about the runtime characteristics of nontrivial programs, right about at the point where the paradigm starts to be really useful for modeling complex problems. It takes some experience to know how to write it efficiently.
And can you interface to it using Python or some more mainstream languages?
You can and should use logic programming that way. There are libraries for doing e.g. networking and user interfaces with Prolog, but it's not a particularly good fit for that kind of programming.
Many Prolog implementations are quite easy to embed to other languages and there are logic programming embedded languages for most popular programming languages. miniKanren  is a popular embeddable logic programming language. There's a chapter about implementing a logic programming environment in Structure and Interpretation of Computer Programs (SICP), which is pretty straightforward.
For my AI courses at the uni years ago, I implemented a simple logic programming environment in Haskell. It was a very fun project and it took me just a few days of work.
That's what I used for my post. There's "Shen Professional" which is a closed source subscription based product getting rapid development http://shenlanguage.org/professional.html
I have not used "Shen Professional" though.
The logic programming language that is used most for real world problems is SQL (the database query language).
Now, SQL may be terrible as a logic programming language (in terms of purity, generality, etc.) but it's one of those things where, because it's there and its capabilities are good enough for a lot of purposes, it gets used more than tools that one has to go out of one's way to find/use.
Further, note that SQL is widely reviled, partly for its irregularity but also for the complexity that is inherent to large SQL expressions.
It seems likely to me that if Prolog ever gained the popularity of SQL, there would be umpteen OO packages out there promising to "tame the complexity of Prolog" with one or another OO wrapper, an approach which I assume would leave pure logic programmers aghast.
 - https://en.m.wikipedia.org/wiki/Logic_programming
 - https://en.m.wikipedia.org/wiki/Declarative_programming
If you look at my whole post, I hope it's clear I'm not trying to say SQL is formally anything, just that it's effectively a logic programming language.
I.e., in an ad-hoc sort of way, SQL can accomplish more or less what a more formal logic programming language accomplishes. You specify a world as a set of tuples and determine whether one set of relations determines another set of relations. That lets you do essentially what Prolog can do; for example, you can map all the examples in the parent article to SQL pretty easily.
How would you map the map coloring example? You can model the color table and the neighbor table in SQL, but that's just the data model. What Prolog also gives you is backtracking search. Is there a good way to do that in SQL? I guess it would work by computing the Cartesian product of states and colors and trying a select on that...
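For contrast, a minimal sketch of map colouring in Prolog, where the backtracking search comes for free (the regions are illustrative):

```prolog
colour(red). colour(green). colour(blue).

% Neighbouring regions must differ; Prolog's backtracking
% enumerates candidate colourings and rejects bad ones.
map_colouring([WA, NT, SA]) :-
    colour(WA), colour(NT), colour(SA),
    WA \= NT, WA \= SA, NT \= SA.

% ?- map_colouring(Cs).
% Cs = [red, green, blue] ;
% ...
```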
> You specify a world as a set of tuples and determine whether one set of relations determines another set of relations. That lets you do essentially what prolog can do
Datalog, maybe, if you have the search mentioned above. Not Prolog, though: Prolog has terms with free variables and unification. Those have no simple "database" interpretation.
The featured article only shows off a small (non-recursive, negation-free) fragment of a Datalog-like language, which itself is only a small (function-free) fragment of Prolog. You can't make broad claims about Prolog from this small set of examples.
Few people know it, even fewer people like it. It tends to be taught really badly. There are not many recent books, and the old ones teach an outdated pre-ISO-standard language and tend to overemphasize use of the cut (!) operator. Practical aspects needed to build "real-world" systems are almost never discussed. (Interfacing with the operating system in various ways, exceptions and other ways of robust error handling, judicious use of mutable data structures when needed, debugging, profiling, optimization, ...)
But yes, if you get over all these hurdles, it's useful for many things.
> And can you interface to it using Python or some more mainstream languages?
Yes, but you might have to go through foreign function interfaces that are painful from both ends. Or, depending on circumstances, you can just use text files (or pipes) to communicate with an external program written in the other language.
It also didn't help that the Japanese project to use Prolog as a systems language for a so-called fifth-generation computer didn't work out.
Ans. 1. (1)
Ans 2. I'll tell you after I've debugged and can run a macro I'm writing for this.
Ans 3. nil
For me Lisp was much more appropriate and easier to program in. Forward chaining and unification are not enough for certain classes of algorithms; sometimes you need backward chaining, probabilistic inference and many other ideas that are not easily translated into Prolog. I remember some discussion about the semantics of clauses. For example missile(ready): is this saying the missile is already ready, or that you are going to prepare the missile, etc.? I find that concurrency and parallelism in Prolog are not easy. In the book "On Lisp" by P. Graham there is a Prolog interpreter in Lisp; this can be interesting for somebody wishing to learn Prolog.
Edit/Added: Erlang was inspired by Prolog, so learning Prolog will help you learn Erlang; but if you are a Ruby type, Elixir is your best option.
For those of you looking for prolog out in the real world, a surprising number of languages contain embedded interpreters for it. Mostly Lisps.
Perhaps most famously, PicoLisp uses a tiny, Lisp-syntax Prolog called Pilog as a query language for its built-in database. Yes, it has a built-in database. PicoLisp is weird.
Also, Smalltalks (seriously).
If I could condense the experience of those last few years with Prolog in one sentence it would be this one: "Prolog is not your friend".
There's a bunch of stuff you (or at least, I) never have to think about in day-to-day programming work, such as depth-first search and how it can go infinite. Then you need to get some experience with your chosen Prolog interpreter and learn its idiosyncrasies.
There are rewards. [plug] My Masters dissertation  is a novel algorithm that learns a grammar from unannotated text. Because it produces a grammar in Prolog's Definite Clause Grammars notation, and because Prolog can interpret DCGs as first-order logic theories, these grammars are also logic programs that model the algorithm's input. I process some text, spit out a grammar and then without needing a separate compiler I run this grammar backwards to generate new strings, or the normal way to parse them. [/plug]
There's a point in the Prolog learning curve where you risk baldness from pulling out your own hair. Once past that point... it keeps hurting you, but the things you can do with it, it's a bit like magic. Or maybe it's Stockholm syndrome :)
I've seen many criticisms of Prolog (some that are repeated in this thread). Eight years in I haven't seen one single line of criticism that still makes sense after you've used the language for a few years.
So to me the real problem with Prolog is that it hurts you so much that most people give up before they really figure it out.
Oh and- should you pick Prolog up never forget this: The dynamic database is evil.
Just a few days ago I was working on something I thought was best tackled using dynamic predicates (written to the database) rather than an in-memory list. My program took hours to process some data, until I noticed I was querying a compound term for its second argument. Just switching arguments brought the processing time down to five minutes on the same data. Did I mention the dynamic db is evil?
I ended up using it a few times since then in a professional setting to handle filling out extremely complex forms.
I even built a module on CPAN to handle creating Prolog facts from Perl data structures.
"Visual Prolog, also formerly known as PDC Prolog and Turbo Prolog, is a strongly typed object-oriented extension of Prolog. As Turbo Prolog it was marketed by Borland, but it is now developed and marketed by the Danish firm Prolog Development Center (PDC) that originally developed it. "