For me, the big problem with Prolog (and the whole concept of declarative languages) is that the intended benefit requires an unreasonably (or impossibly?) smart compiler.
The concept works when I can declare what I want done and the system does it - and when that happens, Prolog is great: the language is excellent for declaring what I want done.
However, it often happens that the system does it in a way that's somehow horribly inefficient, which makes it totally unusable. And then I have to redeclare my requirements in a slightly different way to nudge the system into doing it differently - and this is much harder, because then I have to worry about many more moving parts than just my code.
Also, the language is not really well suited for that; if I have to specify exactly how, and in which order, the calculations need to be made, then imperative languages are a much better tool. I'm throwing away all the advantages of Prolog if I have to do this all the time - and in practice I do.
Haskell has somewhat similar problems (though generally not unexpected time complexity but unexpected memory complexity, through laziness and thunks), but Prolog is much worse in that regard.
> For me, the big problem with Prolog (and the whole concept of declarative languages) is that the intended benefit requires an unreasonably (or impossibly?) smart compiler.
I'd say... sufficiently smart. We've been writing SQL for decades without such complaints and it works rather well in practice. Sometimes you have to introspect your queries and interact with the underlying interpreter to understand the performance constraints, but it is much better than the alternative! Imagine having to write every query as a procedure... what a headache, and the duplication!
Although learning to write well-performing queries does, I think, take some understanding of the underlying theory driving relational databases and SQL... which can be true for Prolog as well.
I'm not really sure why Prolog isn't more popular. Any time I've written declarative DSLs for a solver engine it always feels like I'm re-inventing an under-specified sub-set of Prolog.
After all, the calculation of computer programs is the syntactic manipulation of predicates, is it not? Maybe if we all started with first-order logic and predicate calculus, a language like Prolog would appear more practical. For a majority of programmers, however, it can seem a bit alien, as we're trained to think in terms of procedures and steps rather than invariants and predicates.
> We've been writing SQL for decades without such complaints and it works rather well in practice.
Speak for yourself. My experience of SQL is that it requires careful tuning whenever the numbers involved grow beyond the trivial. Query plans need inspecting, and often indexes aren't used appropriately. Queries are written and rewritten several different ways to encourage the database to choose one plan or another.
If your use of a database is limited to key-value lookups and object graph navigation with an ORM, with <100 values on any given edge, you'll have a fine time, I'm sure. Not everybody uses databases that could trivially be replaced with an object graph.
For more complex operations, I'd like to be able to write in the form of a plan with a graph of data flow directly. Scan this table, filter against this index, sort these two data sources and do a merge anti-join, etc. Today, I need to rewrite the query while knowing the planner well enough to foresee how it's going to implement my query, gradually homing in on my target through a series of indirect inspections.
I use PostgreSQL quite heavily at present and do not use ORMs. I'd guess, probably optimistically, that 80% of the non-trivial queries I write are easily optimized by the query planner and require no further thought from me (and non-trivial queries aren't the majority of what I write in the first place). The remaining 20% are the hard ones that, as you say, require some work and re-writing to get the query planner to choose an appropriate plan.
At least I find the tooling in PostgreSQL to be more than ample to assist in optimization tasks.
The spirit of my example of SQL is that we already use declarative programming languages in mainstream systems. Even with recursive queries and common table expressions I think, and I may be wrong, that Prolog is probably more general-purpose and expressive than SQL. And yet it's not nearly as popular for some reason.
Well, SQL is actually quite a good illustration of the reasons why Prolog isn't popular.
I mean, you technically can write all of an application's functionality in many dialects of SQL, but it's generally considered a bad idea to do so for obvious reasons - SQL is good for some tasks, but for everything else other languages are preferred.
In a similar manner, even in tasks ideally suited to Prolog's strengths (e.g. reasoning systems), that part of the task will be relatively small - in any practical system, the "boring" code of integrating everything with everything else, handling a graphical UI, mangling data from one format to another, wrapping it in a REST web API and god knows what else will be at least 3-10 times larger than the core part of the task.

I recall an old project where we had the core algorithms in Prolog but the surrounding app in Java. The fact that the Java code was more than 20 times bigger wasn't a verbosity issue; it actually did 10 times more stuff. Although it was quite boilerplate, it was all needed, and it was clearly simpler/faster/cheaper to build it in Java back then rather than do everything in Prolog. It's not sufficient for a language to make the hard (or domain-specific) things possible; it also needs to make all the easy things easy, and Prolog really does not.
So the language either needs to be truly general purpose (and "more general-purpose than SQL" isn't sufficient) or easily integrated with other languages. There are successful examples - for example, tensorflow->python, where again the actual ML part for anything more than proof-of-concept is much, much smaller than the surrounding general purpose python code; or SQL, which is well integrated into other languages.
In another post you mention "I've written declarative DSLs for a solver engine" - why wasn't it trivial (compared to writing a new DSL) to embed some Prolog code as the DSL inside the app, as easily as people embed SQL in their apps as the DSL for managing data? There's your answer for why it's not nearly as popular.
> My experience of SQL is that it requires careful tuning whenever the numbers involved grow beyond the trivial.
Depending on what you mean by "trivial", that can be syllogistically true, but if so, then a LOT of useful work can and is done in the realm of the 'trivial numbers involved'.
Great point. An elementary feature of a SQL engine is exposing the query plan itself in a compact, human-readable format. Is there an equivalent for Prolog implementations?
> We've been writing SQL for decades without such complaints
People have to do exactly the "state what I want done in another way to nudge the DB into a performant path" dance that he was talking about all the time in SQL.
But SQL is a specific domain where it is (probably) worth it.
> We've been writing SQL for decades without such complaints and it works rather well in practice.
SQL, at least prior to recursive CTEs, isn't Turing complete, which mitigates the problem; but the need to understand the evaluation strategy of a particular interpreter (which is more involved than understanding the unification algorithm underlying Prolog) has been a common complaint about SQL for a long time.
But SQL makes it worthwhile by still being the most convenient existing method to talk to many databases that have desirable properties. Prolog doesn't currently have a recognized niche as fundamental as SQL's in which it's the best available language.
> specify how exactly in which order the calculations need to be made
There is a language called Mercury that is an extension of a subset of Prolog, i.e., it takes some useful Prolog stuff away but adds some other useful stuff of its own. One of its additions is a mode system; essentially, a simple way to declare how you intend to use your predicates. You can then write your program once, putting the goals in clauses in any order, and the Mercury compiler will use the mode system to figure out all possible data flows and generate specialized, optimized code for each of them.
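To give a flavor (a sketch from memory, not authoritative Mercury; see the Mercury reference manual for the real syntax), the standard library declares modes for append/3 roughly like this:

    :- pred append(list(T), list(T), list(T)).
    :- mode append(in, in, out) is det.      % concatenate two lists
    :- mode append(out, out, in) is multi.   % split a list into two parts

The compiler then picks or generates a specialized procedure for whichever mode a call site actually uses.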
I like a lot of Mercury's ideas, but as a Prolog fan I find it does throw some things out that I rather like. If you're less attached to Prolog, you might like Mercury more :-)
I definitely agree that a sufficiently smart compiler and/or fast interpreter is an extremely important asset when working with Prolog, and also that it takes almost superhuman effort to write such compilers.
However, at least in my experience, "nudging" the system in the right direction is much easier with Prolog than with imperative languages:
With Prolog, all it takes is often simply reordering a few goals, which some systems even perform automatically for you (see YAP, and some parts of SWI-Prolog, in particular the RDF framework, for examples). There is also typically much less code to rewrite in the first place. Critically, the ability to reorder your goals depends on your working in the pure subset of the language. If you leave this subset, then automatic and manual optimizations alike are orders of magnitude harder to apply.
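To illustrate the kind of reordering I mean, with an assumed parent/2 fact table:

    % With a large parent/2 table, consider:
    grandparent(G, C) :- parent(G, P), parent(P, C).
    % For ?- grandparent(G, ann). this order enumerates all parent(G, P)
    % facts first. Swapping the goals:
    %     grandparent(G, C) :- parent(P, C), parent(G, P).
    % starts from the known child, typically visiting far fewer facts,
    % while the declarative meaning stays exactly the same.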
Regarding Haskell and Prolog, my experience also differs significantly: from what I have seen, the consequences of lazy evaluation are much harder to understand than Prolog's implicit search mechanism and backtracking.
>> Also, the language is not really well suited for that; if I have to specify how exactly in which order the calculations need to be made, then imperative languages are a much better tool. I'm throwing away all the advantages of Prolog if I have to do this all the time - and in practice I do.
I'm going to kindly ask you to give at least a couple of different examples of this, because it's another of those criticisms of Prolog that have no real basis in actual practice of programming in the language.
My source for that is that I've been programming all my side projects and two dissertations in Prolog since 2007 and I've never been in a situation where I had to think long and hard about how to order my predicates. Most of the time the correct ordering is pretty straight-forward.
This criticism, about ordering of clauses, has its historical root in the competition between Prolog and PLANNER - because in PLANNER predicate order did not matter, which made the language more "pure" but also less efficient (which in turn is why many fewer people have heard of PLANNER than have heard of Prolog). The real point about it is not that you may have to pull your hair out a couple of times while programming. The point is that in first-order logic, as in maths, predicate order doesn't matter and therefore, in a pure first-order logic language, it shouldn't matter either. The criticism is that Prolog lacks purity, in other words.
Which, if I may be so bold, is nonsense on stilts. The real world is not pure, and you can't have purely-anything languages. People who complain about Prolog's lack of purity are quite prepared to throw away the baby of a very useable, 99% declarative language, allegedly because the bathwater of the missing 1% is really messing their code up. I'm just not convinced by that, to be honest.
Be fair! There are many unfair criticisms directed at Prolog in this thread, but this one does have a basis in fact. You'll admit that there are huge practical differences between, say, these two definitions of the same relation:
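    % Version 1: recursive goal first (left recursion).
    ancestor_of(X, Y) :- X = Y.
    ancestor_of(X, Y) :- ancestor_of(X, Z), parent_of(Z, Y).

    % Version 2: the same clauses with the goals of the last clause swapped.
    % ancestor_of(X, Y) :- X = Y.
    % ancestor_of(X, Y) :- parent_of(X, Z), ancestor_of(Z, Y).

Declaratively these describe the same relation, but procedurally Version 2 terminates on an acyclic parent_of/2 table while Version 1 does not, as discussed below.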
You're right that "in actual practice" of someone who has been programming Prolog for ten years, "the correct ordering is pretty straight-forward" in many cases. But that doesn't mean that the order doesn't matter! It just means that you are experienced enough, and you have no problem with the fact that you have to specify the order.
Well, I didn't say that clause ordering doesn't matter! Of course it matters. And yeah, left-recursion is a pain (because depth-first search etc). What I'm saying is that in principle this may be a problem for purists, but in practice it's no big deal.
I don't think it has much to do with experience, or at least you don't need to be an advanced user to be aware of how clause ordering affects your results. So, yes, you need to understand a few things about how the language works before you can program anything non-trivial, but that's true for any language, isn't it?
Critics, mostly in the past, latched on to the few niggling non-declarative impurities like this to write off the entire language as "not truly declarative" when there isn't really anything usable that's closer to the ideal. Talk about binary logic...
Edit: Glad to see another user around here btw. Check out my new library. Just finished it and am looking for eyes on it :)
> Well, I didn't say that clause ordering doesn't matter!
The OP complained about "[having] to specify how exactly in which order the calculations need to be made", and you said that this complaint had "no real basis in actual practice" and "I've never been in a situation where I had to think long and hard about how to order my predicates". You did kind of dismiss their criticism.
BTW, you keep talking about clause ordering, where goal ordering within clauses is the more difficult issue, I think.
> I don't think it has much to do with experience, or at least you don't need to be an advanced user to be aware of how clause ordering affects your results.
Based on my experience as a teaching assistant in a Prolog university course for a few years, I would say that it does have to do with experience, and beginners often get clause and goal order wrong. It's true that they soon become aware that ordering affects the behavior of the program, but they often don't understand how it affects the program, so they semi-randomly reorder things until they find a permutation that seems to work.
> So, yes, you need to understand a few things about how the language works
Yes! But we collectively haven't figured out how to teach this well, and many people are left with a terrible first impression of Prolog and don't continue to a point where they understand enough to be effective.
> Check out my new library.
From the Readme it looks nice. I think iterm_value/3 should maybe be called iterm_nth/3, which is more idiomatic. Also, since you mention GNU Prolog's array library, maybe also add if your interface is compatible with it, and if not, why not? I think more standardization of libraries across implementations would be better for the Prolog community than more fragmentation.
I think I dismissed the criticism about goal ordering being a show-stopper, not about it being an issue at all. That's what I tried to say, anyway. I'm concerned that miserly nitpicking like this only serves to give programmers a good excuse to not even try to pick up Prolog.
>> BTW, you keep talking about clause ordering, where goal ordering within clauses is the more difficult issue, I think.
Yeah, sorry about that. I often refer to goals in the body of a predicate as clauses. I'm not sure if that's entirely wrong but it might be a bit confusing.
>> Based on my experience as a teaching assistant in a Prolog univesity course for a few years, I would say that it does have to do with experience, and beginners often get clause and goal order wrong. It's true that soon they become aware that ordering affects the behavior of the program, but they often don't understand how it affects the program, so they semi-randomly reorder things until they find a permutation that seems to work.
Isn't that how everything gets done? :)
I don't have any experience with teaching the language, but I do understand it's a very hard subject to teach. I do remember that my first serious attempt at coding in Prolog was unbelievably frustrating. It took me a week to write a measly little predicate to get the next element of a list - because I really didn't understand what I was doing. I basically didn't need to do anything in the first place; I could have done what I wanted with member/2. But this was really not obvious to me from the descriptions of member/2 (or anything else), so I spent a week tracing my program and trying to figure out what the hell it was doing.
I'm used to the pain though, because I'm dumb-as-bricks and everything I've ever tried to learn, I had to really struggle through. So I persevered and now I'm a happy long-time user (that doesn't mean I don't still hurt, often). I understand why smarter students with a lower pain threshold would just give up on Prolog.
I was unhappy with the way Prolog was taught in my degree course. It was mostly "here's the syntax, here's some examples, go figure out the semantics on your own". Which is completely inappropriate for a language that's 99% semantics and basically has almost no syntax.
On the other hand, I think, as students, we had all been collectively spoiled by Java and Python and so on. If most languages are easy to pick up but hard to master, a language that's hard to pick up _and_ master is not going to be very popular.
>> From the Readme it looks nice. I think iterm_value/3 should maybe be called iterm_nth/3, which is more idiomatic. Also, since you mention GNU Prolog's array library, maybe also add if your interface is compatible with it, and if not, why not? I think more standardization of libraries across implementations would be better for the Prolog community than more fragmentation.
Thank you! I appreciate this. Those are good suggestions, particularly the one about following GNU Prolog's interface. You're absolutely correct about fragmentation and I'll try to follow your advice - but indexed_terms is not my array library yet! It's a precursor to that. I'm working on the actual array library now, based on indexed_terms. I just put indexed_terms out there hoping for some early feedback.
It is of course completely true that the order matters a great deal in this case. In fact, in this case, the order of goals matters much more than students typically realize! In my experience, students who write down and then run the first version frequently walk away with the impression that "Prolog is slow". But in fact this is a performance problem only in the widest sense of the word: This is rather a termination problem.
Luckily, there is a powerful way to detect such problems in Prolog, based on program slicing. The trick is to narrow down the program to those fragments that exhibit the same problem.
For example, let us start with the first program and one fact for parent_of/2:
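    ancestor_of(X, Y) :- X = Y.
    ancestor_of(X, Y) :- ancestor_of(X, Z), parent_of(Z, Y).

    parent_of(a, b).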
Rather insidiously, from a quick first test, the program even seems to work as intended:
?- ancestor_of(X, Y).
X = Y ;
X = a,
Y = b .
With the following query, we get to the core of the problem:
?- ancestor_of(X, Y), false.
Nontermination!
Now the point: I can systematically remove some aspects of the program by simply removing goals and even entire clauses. For example, what about this fragment, where I have commented out a few parts:
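    /* ancestor_of(X, Y) :- X = Y. */
    ancestor_of(X, Y) :- ancestor_of(X, Z) /* , parent_of(Z, Y) */ .

    /* parent_of(a, b). */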
This fragment by itself already does not terminate:
?- ancestor_of(X, Y), false.
Nontermination!
The point is: no pure goal you add after the single remaining goal, and no pure clause you add to this program, can prevent this nontermination! It will always stay there unless you insert new constraints (goals) before that goal, or change the clause altogether.
The possible application of such reasoning is a rather unique property of Prolog. In fact, I know of no other programming language that even comes close to admitting such a general and easily applicable mechanism for reasoning about termination properties and other aspects!
More holds: Such reasoning can be automated! It is comparatively easy to write a Prolog program that systematically eliminates goals and clauses for you, and reasons about the resulting fragments. Some kinds of nontermination can even be automatically detected (the general problem is of course undecidable).
A few practical guidelines for writing efficient and especially terminating Prolog programs can also be derived from such considerations.
Those aren't declarative, search-for-answers types of languages in the style Prolog is. It's currently hard for most people to get high-performance code out of a Prolog for the average problem in programming. Whereas, once you understand dataflow or SIMD tricks, you can get quite a bit of mileage out of the other stuff you mentioned that runs circles around imperative, sequential code.
>> It's currently hard for most people to get high-performance code out of a Prolog for average problem in programming.
Hi nick.
In my experience, this is not at all the case. Prolog was "slow" in the '70s, when nothing was as fast as C. Nowadays, that's just not the case anymore. Try a modern Prolog compiler like YAP, with tabling and everything.
What kind of high-performance code were you trying to write, that didn't go as fast as you like? If it's something interesting I'm all for helping out.
I mean in the sense of high-performance applications. I'm sure it's fast enough for the average case. Since another commenter mentioned SIMD, I'm going straight to examples to see what the speed is like. Good test cases would include a key-value store, a web server of at least lighttpd complexity, a game like Quake (esp. interested in fps), top Prolog vs imperative engines for parsing/NLP, an MP3 player, and so on. Stuff that taxes even imperative programs for speed. I'd love to see Prolog do those with similar speed (esp. no pauses on real-time).
Either of you have examples of such things?
EDIT: And what's the story on concurrency in terms of safety and scaling?
> However, often it happens that the system does it in a way that's somehow horribly inefficient and makes it totally unusable. And then I have to redeclare my requirements in a slightly different way to nudge the system into doing it differently - and this is much harder, then I have to worry about much more moving parts than just my code.
Why would you expect this not to happen for complex programs? Declarative languages are never going to have a compiler so smart you can completely forget about optimising. They let you get certain tasks done easily when performance isn't essential and when optimal performance is essential you need to know what's going on underneath.
Prolog execution is essentially navigating a massive search tree. There's always going to be plenty of ways to prune the tree and optimise traversal.
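A toy illustration (slow_sort/2 and ascending/1 are made-up names): a generate-and-test sort explores the full n! permutation tree, and pruning means failing earlier, e.g. by interleaving the order test with the generation:

    % Naive generate-and-test: every permutation is built completely
    % before it is tested, so the search tree has n! leaves.
    slow_sort(List, Sorted) :-
        permutation(List, Sorted),
        ascending(Sorted).

    ascending([]).
    ascending([_]).
    ascending([X, Y|Rest]) :- X =< Y, ascending([Y|Rest]).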
Yeah, I'd like to see a system that let me specify the operational semantics, when I want to pin them down (probably because the planner isn't doing a good enough job, but maybe I have unusual operational requirements). And ideally then check what I've specified against my declared intent.
That lets me work quickly while my needs aren't great and while "the compiler is sufficiently smart", but when I hit a wall it gives me an obvious path forward that doesn't involve obscuring what I'm trying to do.
Perhaps the compiler could be optimized using deep learning. Isn't basically the problem that it has to go through a search space that's too large to find an ideal solution?
My opinion: Humans are good at thinking sequentially, imperatively. Other language paradigms are harder.
That isn't to say that Functional languages (Lisp, Haskell, Scala, etc) aren't as good; frankly, I like them better. There's just a mental gap that has to be crossed and for most developers I've met, that can be challenging. Why do things in a challenging way when I've got Java right here and it works just fine? (straw man, not my own view)
Prolog (logic programming) is a bigger gap, imho. It takes more effort for me to really understand Prolog code. Can do some beautiful things with it, but it's easier to have a few good developers be good at it and put their hard work behind a library/API than it is to have every other developer try to get over that gap.
Part of the problem may just be that languages like Prolog are still leaky abstractions at this point—you still have to consider how it's doing what it's doing, performance consequences etc. In principle, you should be able to use such a language to merely describe what it is you want (and let the language find/construct it), while with an imperative language you have to specify precisely how it will be produced.
Seems like the other difficulty is that its 'interface' (the programmer-facing portions of the language) derives from mathematical ideas which will already be very familiar to those working in the field, so it's very convenient for them; while for those outside mathematics (and the more theoretical parts of CS) there will be surprising gaps when attempting to learn, because of the implicit mathematical concepts in the interface.
For instance, Prolog is based on Horn clauses[0], a subset of first order predicate logic. Additionally, 'relations' are a central part of the language's interface—and if you have a background in pure math, this is great for you because it immediately tells you all kinds of things; if you don't, it's going to be confusing because much of the literature will assume you have similar experience reasoning about relations.
Seems like it would be possible to move those concepts out of the interface, while still using them in the internals...
I've been quite fascinated by answer set programming, because unlike Prolog its clauses are unordered, which gives the language the opportunity to significantly optimise execution - though at the cost that, as with Haskell, it's often quite tricky to reason about what you just did that caused a performance issue.
I think you are right despite the backlash you are getting.
In college I got deep enough into Prolog to write programs where cuts were required. Later, I got excited about miniKanren. Now I've been looking into constraint programming, and what I didn't understand until recently is that it basically generalizes the Prolog techniques to handle any kind of equation solving / relational search approach. You can do amazing stuff with these systems (e.g. look at HAL http://users.monash.edu/~mbanda/hal/), including write custom search algorithms (consider classic Prolog unification just 1 search strategy on a limited domain).
But I don't think there is any getting around the fact that this stuff gets conceptually harder, as it gets more powerful. The idea of solution sets as potentially infinite relations, rather than functionally determined things, is very powerful but there is an abstraction price.
And to the extent people can wrap their heads around it, there is a "letting go" in not writing programs in the style of deterministic algebraic manipulations. Part of this may reflect a bias, but the magic of delegating solution-finding to an algorithm is also dangerous. Are you comfortable not knowing how many answers there might be if you let the program keep running?
Similarly, consider Prolog's negation as failure. You can't express many formulas that you might like involving "NOT". Negation is interpreted as a goal simply not returning anything. There are important reasons for this model, but again, it isn't necessarily as easy as a model where NOT can be used freely.
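A tiny sketch of the asymmetry (the bird/penguin predicates are made up):

    bird(tweety).
    bird(pingu).
    penguin(pingu).

    flies(X) :- bird(X), \+ penguin(X).   % "not provably a penguin"

    % ?- flies(tweety).       succeeds.
    % ?- \+ penguin(tweety).  succeeds: penguin(tweety) is not provable.
    % ?- \+ penguin(X).       fails outright, because some penguin exists;
    %                         it does not enumerate the non-penguins X.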
BTW, not many people seem to know that Japan had a huge, Manhattan-style program in the '80s: the "fifth generation" project. They made Prolog their lower tier... like their assembly language, or close to it (this was back when people were still thinking different computing platforms needed different hardware). Some people ultimately blamed Prolog for what is generally regarded as the failure of that project to leapfrog Western tech. But I think Prolog suffered unduly as a result. In fact, the project was trying to do a bunch of ambitious things and they all hung on one another. For example, they were trying to make speech the UI, with '80s tech...
All this said I would encourage anyone to explore Prolog, miniKanren, Mercury, Shen or anything of that ilk.
Your post is full of good insights, and I also highly recommend these pointers and environments!
I only want to briefly comment on the question "Are you comfortable not knowing how many answers there might be if you let the program keep running?", which I think is well worth thinking about. If you really think about this, then the most interesting programs you can write typically have this property, because they search for things that we do not even know exist, such as new theorems, some structures with unique properties, or even mistakes and race conditions in programs!
Also, in my experience, the more you focus on declarative properties and fundamental principles such as termination, the easier it gets to apply Prolog in practice. The difficulties I have seen many beginners struggle with often arise from trying to reason about Prolog programs in the way they reason about imperative programs, which indeed gets too hard to do in practice very quickly. In contrast, if you think in terms of generalizations and specializations, and program fragments, you can reason much more easily about Prolog programs in practice. However, it requires that you stay in the pure monotonic core of the language, of which constraints are also a subset.
The promise of Prolog was that you'd be able to just define the task requirements, write them down in Prolog, and magic would happen. But even the most experienced Prolog coders I know don't really think in Prolog. When faced with a coding task, they inevitably first (mentally) figure out the problem in an imperative form, perhaps with some recursion, and then they ask themselves "okay, how do I convert that to logic and pattern-matching?"
If you're going to go through that process, then Prolog provides no value: it's just an extra step, and you might as well code in C (okay, Lisp).
I fear that Haskell will turn out the same way. After all monads are just monoids in the category of endofunctors, right?
Interesting - I know a number of people who think in prolog, and when faced with a task that it's well suited for I often mentally figure out the problem in prolog form, and then ask myself "okay, how do I convert that to normal code?"
Happens with lisp as well - I barely ever write prolog or lisp but I regularly think in them first before writing the solution in another language.
This is really dependent on the programmers background and what they're writing. Some code is super easy to write in Prolog and much harder in any other languages. Take type inference algorithms for example. Rather than describing how to solve it, you just translate the typing rules directly to Prolog clauses. Thinking about imperative problems is hard in Prolog and will make you think backwards. Thinking about logic problems in C is the same way.
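As a hedged sketch of what that looks like, here is my own encoding of a tiny simply-typed lambda calculus (var/1, lam/2, app/2, int/1 are illustrative names, not from any particular paper):

    % Each clause is a direct transcription of a typing rule.
    type(Env, var(X), T)           :- member(X-T, Env).
    type(_,   int(_), int).
    type(Env, lam(X, B), (A -> R)) :- type([X-A|Env], B, R).
    type(Env, app(F, A), R)        :- type(Env, F, (T -> R)), type(Env, A, T).

    % ?- type([], lam(x, var(x)), T).
    % T = (_A -> _A).

Unification does the inference essentially for free (modulo the missing occurs check).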
I second this. When writing Prolog programs, the more you think procedurally, the more specific your programs will become in the sense that you can then typically only use them in specific directions.
To make your Prolog code as general as you can, I recommend also thinking in terms of relations. This definitely takes effort, but it pays off in increased generality.
I suspect that very experienced Prolog developers 'grok' it. They get it, they think in it. Whether that's due to a lot of hours of practice in it, or just having a brain that works in a way that Prolog makes more sense, I'm not certain.
I think prolog was a bit ahead of its time. Just like Lisp was ahead of its time in the 80s. Just like Erlang was ahead of its time but much more viable now.
My issue with Prolog is not being able to easily conceptualize 'simpler things' - like parsing text-based data structures, processing arguments, or reacting to errors at various levels of interaction with the OS, databases, or other (micro)services.
I believe that, over time, as we learn how to define programs in ways that allow us to confirm program correctness, models like Prolog's will become more and more prevalent.
Because, in my subjective view, proving correctness of declarative expressions is much simpler and more effective than proving correctness of 'implementation directives'.
I think you are right. I wondered the same thing about functional programming a la ML, LISP, etc. about 15 years ago. Today, "functional" is all the rage and the paradigm is finding its way into mainstream languages. I think the same will happen for logic programming a la Prolog.
That said, I have seen a lot of hostility towards rule-based systems in the last 2-3 years ... it is nonsensical IMHO.
> My opinion: Humans are good at thinking sequentially, imperatively.
It's more basic than that. The underlying CPU is unabashedly sequential and imperative. It even uses (shock, horror) bare GOTOs, i.e. conditional and unconditional jump instructions.
These other models might be more mathematically elegant in some abstract sense, sure, but I'd rather work with the underlying hardware than fight against it.
If you think that functional and logic-oriented programming languages "fight against" the hardware, you may have missed a few decades of developments in efficient compilation and interpretation of such languages. Hint: Internally, they use (shock, horror) GOTOs just like other programming languages, to the same effect.
The major ones usually target an abstract machine or intermediate language which is implemented in low-level ways. Those abstract or intermediate languages make the logic- and function-oriented languages easier to compile. I'd love to see them go straight from high-level Haskell or Prolog to LLVM bitcode, with LLVM's optimizations handling the rest. That would be interesting in light of your comment. Instead, the intermediate, abstract forms remain, out of utility or necessity.
LLVM bitcode is too low-level for many of the optimizations, such as deforestation and similar optimizations of functional programs. It is useful for later, low-level things, and there is a good LLVM backend for GHC.
At the same time, LLVM is too high-level for some other things, such as Prolog's particular brands of stack unwinding and indirect jumps on backtracking.
Is that the interface you'd like to offer others? I love Prolog and its cousin Datalog for authentication systems. It makes them extensible and offers user-comprehensible delegation semantics - though of course I'm likely to implement that in something else.
I don't know. My first language was C++. For years, actually. Then I learned Clojure, went to university, learned Prolog, Haskell, etc., and essentially turned out to become an FP geek. But it was a difficult road to climb. :-)
I don't think there's really a "functional gap". All generally useful programming languages including Prolog are imperative "in the end" no matter what the books say, and anyone good at using them will think about the programs imperatively. The language just has "special features" such as ones that go like "try to match this with that and examine each possibility separately". If you try to write Prolog by "describing" the solution instead of thinking about the steps to achieve it, the code will likely not work at all (infinite recursion) or be very inefficient, in spite of being logically correct.
What may look like a "functional gap" may just be the lack of ability for abstract thought, and more correlated with (in)ability to write well structured and generally "nice" code, especially to identify common patterns and create reusable code. Because functional/logical is just an implementation of some abstract pattern that is believed to be useful, and the person may have trouble understanding this pattern just as they would have problems understanding existing abstractions or creating their own abstractions.
I think you're over-simplifying things if you say that not understanding Prolog programs comes down to a "lack of ability for abstract thought". You can think abstractly all you want, that will not help you if you don't know by heart the sometimes arcane rules of setting choice points and binding variables. Those are not things that you can work out on your own, thinking abstractly; those are concrete rules chosen by people over a period of 40 years.
The converse is true, of course: A person that cannot write well structured and "nice" code in other languages won't be able to magically do that in Prolog either.
That's why it is such a good thing to use a true multi-paradigm language. The power resides in being able to express your problem using the paradigm that lets you solve it in the best way.
For example, you can implement a subset of Prolog inside Common Lisp, and thus you can employ the following paradigms as you wish: logic programming where it fits, and Lisp's functional, imperative, and object-oriented (CLOS) styles everywhere else.
For me, in year 2017, a programming language that implements only one paradigm, for example Haskell (almost a purely functional language), can't be considered a "general-purpose" language. Quite the opposite, it is a very specialized language, and it will work great for the problem domains where such paradigm fits perfectly into.
> A straw man is a common form of argument and is an informal fallacy based on giving the impression of refuting an opponent's argument, while refuting an argument that was not advanced by that opponent.
> Why do things in a challenging way when I've got Java right here and it works just fine?
How is that a straw man? If an individual were to say that, in what way are they refuting an argument you didn't advance?
The "straw man" in the straw man fallacy is the argument that nobody is making. The comment about "Java is right here" is not a straw man fallacy -- but it is a straw man.
I'm sorry, genuine question, I'm still not seeing how that's a straw man. How is "Java is right here" an argument nobody is making? He's using it as an argument himself to question the value of working with a functional language, no?
Sorry -- more precisely a "straw man" is usually an argument which you do not advocate yourself, that you construct -- usually in order to argue against it and strengthen your own position.
When the author of the comment says "straw man, not my own view" in parentheses, he is saying "hey, this is what I imagine somebody who likes Java and not functional languages might think".
I have a few ideas, but the main one is that: independent of the language/semantics, languages only really take off because of a "killer app" use case where they are required. Once that's in place, tooling is built to make it much easier to complete projects in.
As a few examples: Objective-C was only popular for years because it was required to write iOS apps. Ditto JavaScript being the only way to write for web. Ruby only got popular after Rails. C++ was the blessed way to write for Windows in the 90's. Java had a giant marketing budget, "write once, run everywhere," and was looking to be the best way to write for the web (lol applets). This isn't a perfect explanation, but in many of these cases, it wasn't about eager-vs-lazy, control flow constructs, variations in type systems, or anything related to what the language offered you, it was primarily necessity to be on the platform of your choice.
After that, tooling evolved, and they became easier to write major projects in. Why write a web app in Erlang when the JVM has every major templating system, an implementation of CommonMark, several high-performance JSON libraries, model validation, several mature build systems, and thousands of Stack Overflow answers?
This makes it hard for languages like Crystal or Nim to take off, but ON TOP OF THAT Prolog is asking its devs to completely change how they approach programming.
What would it take to make Prolog take off? A killer app. Which, in the 80's, looked like it was AI :-p
"... for a quite generous definition of "worked"."
Barely. All kinds of people learned it quickly. They built web applications. It became a dominant web-application language. The momentum led to many improvements in its ecosystem and to many of its deficiencies being fixed. Prolog failed to do... any of that at PHP's scale.
A language like PHP greatly increases the chance of security problems. There is ample evidence of such issues in practice. This is one consequence of the quite low-level way to reason about the available data, and a class of problems that does not arise so easily in Prolog.
This is just one of many instances where barely working solutions are used, partly stemming from a lack of alternatives at that time. Robust web frameworks are only now becoming gradually available in Prolog!
PHP wasn't designed for secure web applications. Most of its users don't care about that. It thrives anyway. So, this isn't a failure of PHP. On the contrary, the author cranking it out quickly might have been a reason for its success.
Now, someone wanting secure apps might not want PHP. They traditionally went with "safe" languages such as C# or Java, whose runtimes were full of 0-days. They might find Ada, Component Pascal, or Rust with a web framework helpful. Even the enterprise sector hasn't been writing most web pages in Ada, though. ;)
Why are you downvoting what I wrote? You are arguing against statements I did not make.
PHP worked for many people in what I can only consider a quite generous definition of "worked", in tandem with countless security problems that arise almost necessarily from its low-level data representation. I am not arguing that this has impeded its adoption, that most users care about this, or that it does not "thrive". These are not signs that it works in the way I prefer software and languages to work. In particular, for writing secure web applications, I cannot recommend PHP.
In the future, I expect to see more and more Prolog web applications. The necessary frameworks are now becoming available, with much better safety properties.
You just repeated the statement I'm arguing against: "generous definition of worked." The goal of PHP was to help people, esp. non-experts, quickly build web sites. People of all skill levels managed to build web sites that did what they intended in the intended use case. As in, PHP worked for the exact thing it was designed for. It also went beyond that by getting massive adoption, with people cranking out too many sites to count. Its success at its goal is legendary, in a good or bad way depending on who you ask.
Next, you talk about the preference for secure web apps. Being a high-assurance security engineer, I'd expect you to immediately start talking about Ada, Opa w/ modified back end, SWIFT on SIF, Ur/Web, Haskell with a secure framework... languages that are systematically designed to either make classes of vulnerabilities impossible or reduce them greatly. Also, esp. Ada or SPARK, whose low-level libraries are written in the same language for as much safety as possible. Instead, you counter the insecurity of PHP use by recommending a Prolog not designed for security, with low-level components probably written in C or C++, since most Prologs are. It's probably gotten almost no pentesting to knock out low-hanging fruit, either. If those are true, then it's highly hypocritical to smear PHP while recommending something so insecure or uncertain in its security.
Far as downvotes, it was because you made a bogus claim about PHP's success with no substantiation. I certainly didn't downvote you, cuz I'm anti-censorship, voted to eliminate downvotes on Lobsters, and it's impossible to downvote a person replying to you. Check that claim by looking for a downvote option next to my name. I instead prefer to counter bullshit with facts in actual comments. Also, with citations when I have time. As is clear by this one.
Thank you for your explanation! I do not in the least dispute that the goal of PHP was to help people, or that people managed to build web sites for the intended use cases. In fact, if one of PHP's goals was to allow a great number of unintended security flaws in user programs, then this was also splendidly achieved. Personally, I only cannot bring myself to call a site with security flaws "working", and when using PHP, these flaws are simply more easily made than with Prolog in my experience. Note that I need not even go as far as making any statements about particular implementations of Prolog or PHP, only about user programs written in the two languages.
Please also note that I cited quasiquotations as one important advantage of Prolog. As for citations, the most relevant publication for this feature is:
This feature lets you easily build safe template engines in Prolog. Please let me know what you think, if you have time. I am in fact frequently looking for a pentester when building Prolog-based websites, are you interested in such a project?
Having done some work on static analysis of PHP applications, I agree that the language is very poorly designed. It seems the designers add any feature that they think will make it easier to write code, regardless of its effects on readability or security. It's an ongoing problem, too. One might have had some sympathy for an inexperienced language designer who pushed something out into the world because he thought it was cool, but then when people pointed out the problems with it, studied language design and learned to make better decisions. That's not happening with PHP; even some relatively recent design decisions are just mind-bogglingly bad. I am particularly frustrated by the choice, a few years ago, to remove the requirement that arguments being passed by reference be marked with an '&' at the call site (as well as on the function parameter receiving the reference). Since in general you don't know until runtime what function is called at a variable call site (e.g. '$f(...)'), you can't tell by looking at such a call which arguments, if any, are being passed by reference!
But, all that said, I think you can write an unsafe library or framework or app in any language. In fact I recall a vulnerability in a widely-used Ruby library in which one of the API functions normally expected some kind of object, but if you passed it a string instead, it would call 'eval' on it for you and use the result. This "helpful" behavior was not documented.
My point is that there's more to educating the masses about writing secure code than just telling them not to use PHP.
Thank you for this! Interestingly, the cases you mention cannot arise in Prolog, because Prolog terms are never evaluated implicitly. They are just terms, and you write predicates that reason over them. In pure Prolog, there is no way to pass something "by reference" either.
If you are determined enough, you can definitely write an unsafe library or app also in Prolog, and security mistakes are also routinely found in Prolog implementations, just as in PHP or Java implementations. The main point is still though that a large class of security issues that easily arise in PHP user programs by one of the ways you mention is far, far less likely to occur in Prolog programs, due to the more direct, symbolic way you reason about data in Prolog, and additional mechanisms such as the mentioned quasiquotations which allow safe embeddings.
" The main point is still though that a large class of security issues that easily arise in PHP user programs by one of the ways you mention is far, far less likely to occur in Prolog programs"
This is true. While we're at it, I think it's worth bringing up that the most powerful use of Prolog is embedding it in a LISP that is "batteries included." One like Racket. That way, one can use a safe, easy-to-analyze, functional style for most of the application, one or more DSL's for the templating (esp HTML), and Prolog operating on LISP structures when Prolog is best thing to handle it with. Alternatively, Shen uses something like Prolog as its type system so you can hand-roll a custom, type system for each component which might include security properties.
Far as good design for web- and security-oriented work goes, the best I've seen is the Opa language, for doing that plus being productive.
Too bad they moved the backend to Node. Probably to latch onto an ecosystem getting momentum, which is a lesson from the Worse is Better philosophy. Most IT tech that didn't do that disappeared into history at some point. I'd have preferred it be Go if it had to be one of the new, popular things, given it's fast, simpler, and safe enough. Hell, if libraries aren't a concern, they could even output a safe subset of C with all the checks enabled, like Pieter Hintjens did with iMatix DSL's.
Hell, it might be dead. I'll have to email them some time this week to find out what's up. Fortunately, it's open-source so others can pick up where they left off if they want. Or do a clean-slate work with similar capabilities.
Got word from a developer that the project is on hold. He claimed no residual vulnerabilities since it extracts to Node.js, which is maintained. He said some are using it in production. They still welcome contributors.
"I agree that the language is very poorly designed. It seems the designers add any feature that they think will make it easier to write code, regardless of its effects on readability or security. "
That's exactly what they do. It's why it succeeded at its original goal, got popular, and is currently a mess for folks who've seen better-designed things. Further, I'll add that it doesn't appear to have been designed at all so much as hacked together as a pile of features that were useful to the author at the time, then extended over time. Just like C was when I researched it.
It's even worse than the two of you think. It gets worse still in the places where they actually tried to "design" some aspect of it. The hashing of names was worth bookmarking:
I had one project on that topic saved back in the PHP 5 days. What's the current state of the art in static analysis tools for PHP, both commercial and FOSS? And does that sub-field have any that can prove the absence of common, severe errors, like Astree Analyzer does for C and SPARK for Ada? The PHP equivalent of severe errors, anyway. Especially anything allowing code injection.
"widely-used Ruby library in which one of the API functions normally expected some kind of object, but if you passed it a string instead, it would call 'eval' on it for you and use the result. This "helpful" behavior was not documented."
When eval will happen or whether risky constructs like that are used at all is one of the things that would be on my list of requirements for static analysis tools. I should be able to spot those kinds of issues in one pass before using a library. In theory anyway.
"My point is that there's more to educating the masses about writing secure code than just telling them not to use PHP."
It takes some books and practice. Also, you can get pretty far telling them to use Airship CMS since it was designed for security. I don't know anything else about it, though, since I don't do web apps or PHP.
Wow. I'm impressed with how well you handled that reply. (Thumbs up)
Although not interested in pentests, I do collect info on Prolog and logic programming in general, as I think they can be a nice cheat for doing specs and code with no mismatch. Specs are the code (sort of). Additionally, verified systems use theorem provers, some of them based on first-order logic. One can write the tool or reference implementation in those. A high-performance Prolog can iterate it quickly, or be used before slower, verified provers to knock out bugs faster. It's also interesting for maybe making apps more maintainable.
So, I appreciate the link and offer. I'll definitely read it tonight. I'm sure the capability you describe can knock out web errors despite risks in underlying TCB. In return, I offer you two that are high-performance and high-assurance respectively with a bonus that's practical.
You should seriously check out Mercury if you haven't. It's Prolog plus strengths from functional programming. It's also used by Mission Critical IT for business software. If you don't need Prolog's libraries, then it should (I'm speculating) be superior based on features and performance alone. Just going with second-hand data here, cuz I'm not a logic programmer. :)
I think sometimes you don't even need the "killer app" when the language itself is "pure tooling" and there is a sufficient demand for tooling in a given market, see TypeScript that basically is "JavaScript with tooling".
For me, what killed it was the "fun" of making a simple change to your program and, through a single-character mistake, inserting a bug that turns its run time from O(n) to O(e^e^n).
Besides, depth-first recursive searches are easy to write and almost never work well in practice, so even the problems that are well represented in Prolog either do not get efficient binaries from the existing Prolog compilers or are easy enough to write in another language that little is lost in the transition (often both).
That said, I do think search based programming is underrated. There ought to be some representation for theorem resolvers that is good for general purpose programming. It's just that nobody found it yet.
It's about the ONLY PortableApp that offers any kind of program development capability beyond text editing, as far as I could tell. No compilers, no interpreters outside of this and a couple of SQLite packages. Anyway, I pulled this one down, fired it up, and... no worky. I got a console; theoretically I could execute commands, but try to access the help or docs and it bails out with an error, telling me xpce can't be loaded because load_foreign_library/1 is not defined. At least half the menu commands failed with the same error, closing out the app in the process. Basically, the app is impossible to use.
So, there's my answer, one that can be applied to many otherwise promising languages. Any system looking to gain traction really needs to go out of its way to Just Work; to make itself readily available, easily installable, immediately functional, and with clear documentation right at hand. You can carry on 'til you're blue in the face about lazy programmers unwilling to learn a simple build-and-install process, but with the ready availability of other environments that generally Just Work, there's really no excuse. At least, that's how I feel about it.
But why try the "PortableApps" package of a program that isn't listed on that program's homepage instead of using that program's own executable installer listed on its homepage?
Is it not possible that this "PortableApps" package broke the program?
Please don't take the following as "lazy programmers unwilling to learn a simple build-and-install process": Please consider filing this as an issue with the SWI-Prolog team, to benefit everyone who runs SWI-Prolog as such an app.
Please note that SWI-Prolog is free software and depends on such contributions or at least reports to work reliably on all platforms. Alternatively, there are also several commercial Prolog implementations with professional support to help you in case of difficulties.
I've linked your post in the ##prolog channel on freenode, but I'm mostly a n00b, so if at some point you want help figuring it out, you'd probably be better off joining yourself and giving a more complete report.
You only need a portable app version if it isn't a portable app by default.
If I go grab the Python zip for my platform, it is already a "portable app" - what is there for PortableApps to do? The only dev environments I can think of that aren't portable by default are not free, so PortableApps wouldn't be able to release them anyway.
Netsil's stream-processor is programmed using Datalog, which is a subset of Prolog.
Our architecture/use-case: at Netsil, stateful packet-processing pipelines are written as declarative rules, and materialized tables are backed by an SQL-compatible embedded in-memory DB. Tuples are processed in parallel, and parallelism is controlled by specifying context constraints in rules (e.g. packets within the same TCP flow should be processed in order). Further, Datalog workflows are distributable by providing a "location specifier" in rules - i.e. tuples and attributes serialize to protocol buffers and can be sent/received over ZMQ. Also, the materialized tables in Datalog can be made to sync up with Zookeeper, allowing distributed stream processors to do service discovery and so on. It's a pretty sophisticated runtime/compiler, written primarily in C/C++ for optimal performance. The underlying runtime uses a combination of Intel TBB and Boost ASIO.
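To give a flavor of the style, here is a generic Datalog-ish rule in Prolog syntax (purely illustrative, not our actual rule language):

    % Derive a per-flow latency tuple by joining two packet events.
    flow_latency(Flow, Latency) :-
        packet_event(Flow, syn, T1),
        packet_event(Flow, synack, T2),
        Latency is T2 - T1.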
We are in general big fans of declarative approaches as they have saved us a lot of time, allowing our small team to leapfrog the competition. You can learn more about our architecture here: https://netsil.com/blog/listen-to-your-apis-see-your-apps/
Disclaimer: I am co-founder of Netsil (www.netsil.com).
It was about 15 or so years ago that I tried Prolog in an application for analyzing features of images. I remember two things from that: 1. Even though concurrency is elegantly described through guards, the implementation turned out to be not that easy. 2. You have to bypass the beauty of pure logical statements in practice (for example, see 'The Craft of Prolog' by Richard O'Keefe on optimizing Prolog code). Once you get to this level of writing Prolog code, you find other standard languages and libraries more competitive and practical.
When I tried to use SWI-Prolog for a toy task, it failed miserably when dealing with facts that contained numeric expressions. I think if a variant of Prolog were paired with an SMT solver (e.g. Z3) it would be much more relevant today.
For example, when dealing with bitemporal data (common in finance) you might have a set of facts with two date-range attributes. Let's simplify by saying we have a set of facts, each having a start date and an end date. Here is some non-working Prolog that could work if there were such a capability.
entity('TimeWarner').
ticker(entity('TimeWarner'), 'TWC', date(1999-01-01), date(2014-04-31)).
ticker(entity('TimeWarner'), 'AOL', date(2014-05-01), date(9999-01-01)).
current_at(ticker(entity(_), _, Start, End), T) :-
T @> Start,
End @> T.
% find the current ticker for Time Warner
current_at(ticker(entity('TimeWarner'), _, _), date(2017-05-29))
% SWI-Prolog cannot unify the above clause!
(The code above is semi-pseudocode - but I did try and fail to make this work some time ago)
Now, it turns out that this already sort of exists; it's called Answer Set Programming. There is one implementation out there [0] - but I didn't feel like dredging up an old research project.
Slightly OT, but half-open intervals are just better for this sort of thing. After all, the change date could have been '2014-05-01T10:05:23.3982Z' instead of '2014-05-01'.
[EDIT:] In case it wasn't clear, I'm suggesting that it would be better to write something like this (a sketch), with an exclusive end date:
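ticker(entity('TimeWarner'), 'TWC', date(1999-01-01), date(2014-05-01)).
ticker(entity('TimeWarner'), 'AOL', date(2014-05-01), date(9999-01-01)).

A range then contains T precisely when Start @=< T and T @< End, so adjacent ranges share a boundary without overlapping or leaving a gap.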
Would something like this work for you? (Edit: parse_time/2 is a SWI-Prolog built-in, so this won't run in other Prologs. chielk's solution above is best.)
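% Sketch: dates as atoms, converted to time stamps with parse_time/2.
% (April has only 30 days, so the first end date is adjusted.)
ticker('TimeWarner', 'TWC', '1999-01-01', '2014-04-30').
ticker('TimeWarner', 'AOL', '2014-05-01', '9999-01-01').

current_at(Entity, Symbol, TimeAtom) :-
    ticker(Entity, Symbol, StartAtom, EndAtom),
    parse_time(StartAtom, Start),
    parse_time(EndAtom, End),
    parse_time(TimeAtom, T),
    Start =< T,
    T =< End.

?- current_at('TimeWarner', Symbol, '2017-05-29').
Symbol = 'AOL'.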
Basically, you need to treat the date as an atom, and it's better to avoid wrapping everything up in compounds like ticker(entity('TimeWarner'), ...) the way you're doing. That just makes it harder to read and write your queries. You can just pass the argument values in the head of the rule and match them with the facts in the database, like I do above.
If you really require the dates to be in yyyy-mm-dd format, the code above becomes slightly more verbose but it's not the end of the world.
chielk's solution is good, but it doesn't have to be as complex. You were on the right track with using @> (the "standard order of terms"), which does the right thing automatically. Here's your fixed definition (a sketch along these lines):
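% Keeps your date(Y-M-D) terms; note that the rule must actually
% consult the ticker/4 facts, and that your original query passed
% a ticker/3 term that could never match them.
current_at(ticker(E, Sym, Start, End), T) :-
    ticker(E, Sym, Start, End),
    T @> Start,
    End @> T.

?- current_at(ticker(entity('TimeWarner'), Sym, _, _), date(2017-05-29)).
Sym = 'AOL'.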
That's a nice, short answer, but it's a bit unsafe, isn't it? The standard order of terms doesn't know anything about dates. The above works only because compounds are compared recursively on their arguments.
For those less familiar with Prolog: in Prolog terms, a date in a format like 2014-5-1 is a compound term -/2, with the operator "-" as its functor and 2 arguments: 2014-5 and 1. The first of those is, again, a compound, also with functor - and two arguments, 2014 and 5. So the entire date is a recursive term.
A comparison predicate, like @>/2, etc, will walk over the arguments of this term and compare them as it goes - but since a "date" is not a Prolog type (only "number" and "atom" really are) it will not treat a term meant as a date in any special manner.
Which, in the end, means that queries like the following (illustrative examples) are all true:
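?- 2014-5-40 @> 2014-4-31.
true.

?- foo-bar-baz @> 2014-4-31.
true.

?- 2014-5-2 @> 2014-5-1.
true.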
The third query has some merit, but the rest would need some type-checking somewhere above the comparison, to make sure you're processing dates and not just arbitrary terms. In that sense, wrapping up dates in a date/1 term, as the OP did, may not be such a bad idea after all - or at the very least one could write an is_date/1 predicate to handle type checking, for example (a minimal sketch):
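% Minimal sketch: does not validate days per month.
is_date(date(Y-M-D)) :-
    integer(Y),
    integer(M), M >= 1, M =< 12,
    integer(D), D >= 1, D =< 31.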
?- form_time([2014-5-1], T).
T = datetime(56778, _5804),
_5804 in 0..86399999999999.
This uses CLP(FD) constraints to reason about time in a very general way, usable in all directions. For example, you can express "all times after 2014-5-1" as follows:
?- form_time([after(2014-5-1)], T).
T = datetime(_1304, _1306),
_1304 in 56778..514671,
_1346#=86400000000000*_1304+_1306,
_1346 in 4905619200000000001..44467660799999999999,
_1416#=<_1346+ -1,
_1416 in 4905619200000000000..4905705599999999999,
4905619200000000000+_1484#=_1416,
_1484 in 0..86399999999999,
_1306 in 0..86399999999999.
I'd argue it can't be done. It basically has features that are best implemented as a library. In fact, the many kanrens (including Clojure's core.logic) demonstrate the effectiveness of this approach. Meanwhile, stuff like GPU-accelerated ML has supplanted it in the natural language processing arena, and functional programming has become the paradigm of choice for theorem provers like Agda.
> It basically has features that are best implemented as a library. In fact, the many kanrens
I've never seen one that came close to a real Prolog. Having a library that implements a slow, informally specified tiny subset of Prolog doesn't say anything about the usefulness of real Prolog.
Do you know very much about GPUs? Would it be viable to implement a kanren on top of CUDA or something like that? Wouldn't that lend kanren insane performance gains?
not too surprisingly, a lot of logic programs don't parallelize very well at all because of very linear dependencies (control flow).
and some do a lot of largely independent but very regular work that would execute quite well on a simd/vector/smt array.
people have come up with some tricks to map control flow into simd (like some really cool parser tricks), but i think in general those have regimes where they have sub-serial performance.
so maybe? if you had the magic compiler? or you provided some manual annotation support? or a robust ffi?
for sql, which has a much more limited footprint, there's been some cool vectorization work.
Prolog never had the SQL moment. SQL had IBM, Oracle, and relational databases to back it up and propel it into popularity. Prolog just never had that application. I also believe that the Japanese Fifth Generation Project's failure did a bit to harm the idea of a mainstream Prolog. I was very interested when Borland released Turbo Prolog, but it didn't quite last that long. It also, in its early days, suffered from a bit of Smalltalk vendor syndrome.
..and, sadly, it didn't look like C
// strangely Prolog is listed as a spelling error by Firefox...
Prolog has a beautiful, powerful idea at its core: Resolution. Datalog shines because it doesn't try to overreach and focuses on making this core nimble and expressive in a domain excellently suited to resolution. Prolog tries to be general purpose and is all the worse for it.
Eventually the ideas in Prolog will make their way into a general purpose language where the relationship between the logical components and the algorithmic components of a program is harmonious instead of a constant conflict.
This is just my opinion, but I think the problems you can express well with a language like Prolog are quite special. The problems programmers need to solve every day are better expressed by a language that's closer to human languages. How would you write in Prolog that you want it to serve a website? Of course this is possible, but probably not in an idiomatic way. Furthermore, as with functional languages, the performance of the program is harder to predict.
This is the central problem, I think. There are problems which are traditionally solved using a different, non-Prolog-like approach, and we're mostly comfortable with that approach, unlike with others.
A successful general-purpose language has to solve all - or at least all important - problems sufficiently well. In communication with each other we use natural languages, which conveniently allow us to omit the hard parts if we wish, so they bend rather easily to everything. With precise languages we so far have to either hop between paradigms or use clunky detailing. We either need to look at everything through a Prolog (or other single-language) lens, or keep using a variety of tools.
It's difficult to get started with Prolog beyond toy examples. Prolog was originally designed for creating linguistic models for use in NLP. Even for this original purpose it isn't exactly easy to use.
When it comes to mundane tasks such as opening a file and reading its contents as a string or accessing databases, things get even more difficult. Technically, this is all possible with Prolog, too. It's just not exactly fun to do so.
First of all, I fully agree that it is hard to start with Prolog initially. However, let us take a look at these particular examples:
As to opening a file and reading its contents as a string:
I find it best to use Ulrich Neumerkel's library(pio) to accomplish this task. Importantly, this lets you apply a DCG to a file in a pure way. I start with a DCG that simply describes a list of characters:
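% A sketch of such a DCG: content//1 describes a list of characters.
:- use_module(library(pio)).

content([]) --> [].
content([C|Cs]) --> [C], content(Cs).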
I save this in content.pl, just to have a file to try. I can now apply this DCG to the file contents with phrase_from_file/2:
?- phrase_from_file(content(Cs), 'content.pl').
Cs = [c, o, n, t, e, n, t, '(', '['|...] .
Thus, I have read the file contents as a list of characters, which I can easily convert to anything I want with other predicates.
As to accessing databases: That's quite straightforward too, in particular if we take into account the following: If you are really using Prolog professionally, then typically Prolog is the database. You simply assert facts, and retrieve them by querying the built-in Prolog database.
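For instance (a toy sketch with made-up facts):

?- assertz(stock(aapl, 150)),
   assertz(stock(msft, 300)).
true.

?- stock(S, Price), Price > 200.
S = msft,
Price = 300.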
Personally, I find Prolog queries much more convenient and also more expressive than SQL, and great fun too.
> If you are really using Prolog professionally, then typically Prolog is the database. You simply assert facts, and retrieve them by querying the built-in Prolog database.
That was my personal aha moment with Prolog when I realised that Prolog statements are quite similar to SQL queries in that you declaratively define the results you expect instead of the exact directions describing how to arrive at those results.
The problem is "Prolog is the database" is that there isn't a great solution for persisting that database, which is usually what people mean when they talk about databases. Sure, for SWI-Prolog there's Persistency but that a) doesn't help you if your data doesn't fit in memory, and b) doesn't give a lot of the same guarantees as something like Postgres about durability through sudden failures (as far as I can tell, the docs sure don't mention it). It's closer to SQLite than a database you'd use for a production web app.
Yes, I agree with this. In SWI-Prolog, in addition to library(persistency), there are already some published results on transaction support for the internal database, which will give you some features that facilitate such use cases. This is already available and in fact also running for production web apps, but currently only for the RDF database. It is true that if you need more advanced functionality that dedicated database systems readily give you, then you have to wait until such features become available in SWI-Prolog, pay for their implementation, or resort to using an external database for which there often are bindings.
Prolog is used in IBM Watson, Datalog, and AFAIK in Windows OS.
Prolog is not so popular for general-purpose computing because of inconsistent compilers, compatibility problems, difficult debugging, high maintenance costs, few experts, and a steep learning curve (my professor joked that the more computer science a student has been exposed to, the harder the mental switch to Prolog).
Prolog remains great for education on logic, NLP parsers, recursion.
As someone who has quite the extensive experience with Prolog, I can best summarize that there are some types of programming problems at which it excels: deduction, ordering, discrete constraint problems, etc; and at everything else an imperative language is often simpler to use to achieve the same end result.
I'm no expert, but I've tinkered with Prolog in the past and with Clojure's core.logic more recently, and in my opinion it's because the parts of my problems that would be a good fit for Prolog make up only a small part of the solutions that I write to solve them. For this reason, something like core.logic is much more interesting and useful, because it means I can express the part that is well suited to logic/constraint programming in "prolog", and the parts that are not well suited can be written in another language more suited to those tasks.
For example, I can write some code in clojure, that, for example, implements a UI which then calls core.logic to do some processing, which then calls some clojure to pull the logic data from a database. If I wanted to use prolog, I'd have to do something like: (other language -> ffi -> prolog -> ffi -> other language) which is usually too much effort for me to bother.
As to the first question: There are several reasons for this. One is rather inherent and can be understood by considering the following analogy:
Java, C, and many other programming languages are like chess: There are many syntactic rules, and by learning them, you already obtain a rough overview of what you can do in principle. You try out these constructs and get a sense that you have accomplished something, even if it is rather worthless, while more complex tasks are extremely hard to carry out successfully in these languages.
Prolog is more like Go: The syntax is very simple, and there is essentially only a single language element, the logical rule. This means that even if you know, syntactically and semantically, almost everything about the language, you have no idea what to do at first. This can be rather frustrating. From this, beginners easily arrive at the misguided conclusion that the language is useless, or restricted to very specific applications. But it only means they have not grasped its true power and flexibility! Getting to the core of Prolog is hard, and requires systematic guidance.
This inherent difficulty is frequently compounded by a rather ineffective and outdated didactic approach which, at its worst, stresses difficult and mostly superseded procedural aspects over more important declarative principles and more modern solutions like constraints. This easily gives the misguided impression that the language is rather imperative and limited in nature, and again causes many students to dismiss it due to their wrong impressions.
A third reason is found in the implementational complexity: From a user's perspective, a major attraction of Prolog is its ease of use due to the syntactic simplicity, powerful implicit search mechanism, generality of predicates etc., which are features that are rather specific to logic programming languages. The complexity of all this is shifted to the implementation level: In order to make all this both powerful and efficient, the implementation must do many things for you.

This means you need, among other things and in no particular order: an efficient garbage collector, JIT indexing, a fitting virtual machine architecture, a fast implementation of unbounded integers, rational numbers, good exception handling, ISO compliance, many goodies like tabling, an efficient implementation of constraints over integers, Boolean variables, Herbrand terms etc. Most of these topics are even now still subject of active research in the logic programming community, with different advantages and trade-offs. Implementing an efficient Prolog system is a project that easily takes 30 to 40 years.

In fact, we are only now getting to the point where systems become sufficiently robust and feature-rich to run complex client/server applications for months and years. In such complexities, you find the answer why Prolog isn't more popular yet. It has simply taken a few decades to implement all this in satisfactory ways, and this work is still ongoing. In my view, Prolog is now becoming interesting.
To the second point, Prolog already is a great general-purpose language. You can use it for almost all applications that are currently written in Java and Python, for example. Of course, there are always some features that are worth adding on top or via extensions, and certain tasks would benefit from this. For example, you can add extensions for type checking, and for fast arrays. Various Prolog implementations are already experimenting with such extensions. Many extensions can in fact be implemented via term and goal expansion, a facility that is analogous to macros in Lisp, or via simple reasoning over given programs.
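To illustrate term expansion, here is a toy sketch (a hypothetical example in SWI-Prolog syntax): a user:term_expansion/2 clause rewrites terms at load time, much like a Lisp macro transforms source code.

% Toy sketch: rewrite square/1 facts into rect/2 facts at load time.
:- multifile user:term_expansion/2.

user:term_expansion(square(S), rect(S, S)).

square(3).   % actually stored as rect(3, 3)

After consulting this file, the query ?- rect(W, H). succeeds with W = H = 3.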
Good post, in particular splitting out the differences between explicit syntax vs expressive syntax. I myself am quite fond of expressive formal systems, but equivalently useful explicit systems are much easier to pick up even if their ruleset is larger. One has hack-y corner cases but generally keeps you on the rails, the other has the hacks as the rails and expects you to not fuck up where you're going. Or something like that.
*) machine and assembly languages: simple syntax, simple semantics
*) procedural languages: complex syntax, AST not available, many language constructs
*) Smalltalk, LISP, Prolog: AST available, few language constructs
It's definitely worth a look! Richard O'Keefe, one of the most highly regarded and accomplished Prolog programmers, even wrote his own Smalltalk implementation too:
One of the worst things I found during my prolog courses was that authors seemed to conflate "shorten" and "simplify". We had whole exercises dedicated to taking a readable chunk of code and turning it into some godawful one liner.
You can answer this question yourself: simply figure out how you'd implement certain types of programs in Prolog. For example: an operating system, a Unix command-line tool, a video game, etc. You'll quickly shake out deficiencies.
What's wrong with writing Unix command-line tools in Prolog? I'm asking because the "deficiencies" aren't clear to me, although I have written command-line tools in Prolog.
These are good questions! As a simple control, I have asked myself these questions with Java in mind, which enjoys notable popularity.
Would I write an operating system in Java? No. A Unix command-line tool? No. A video game? Probably not.
In my view, this casts some doubt on the test's adequacy for answering the initial question. In the concrete case of Java, I think marketing and other influences also played important roles. Similar influences could conceivably work for Prolog too.
You are not asking the same questions. The question is how would, not would. While I'll admit writing an operating system in Java is non-trivial (thinking of kernel-level programming here), writing a command-line tool or a video game is pretty simple. Your main method receives command-line arguments, so that part is covered, and for games you do pretty much what you would do in any other language.
Thank you! Indeed, the question was "how" would I do it, so let us consider it:
Operating system in Prolog: Non-trivial, i.e., like in Java. Command-line tool: Pretty simple: My main predicate receives command line arguments. There are countless such examples already, included in the SWI-Prolog distribution (for example). And, as you say, for games I would do pretty much the same as in any language also in Prolog.
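For instance, a complete (hypothetical) command-line tool in SWI-Prolog, saved as hello.pl:

% Run main(Argv) after loading, and halt when it finishes:
:- initialization(main, main).

main(Argv) :-
    format("Hello, ~w!~n", [Argv]).

Running swipl hello.pl world should print Hello, [world]!.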
In my view, this still leaves the same doubt about these questions: Can we distinguish Java from Prolog in any way by answering them?
I have used Prolog a few times in a professional setting. When you are using it for the right problem, it is perfect.
To use it in a more general purpose sense, I think you need a couple of things.
First, you need some really great training and/or books showing practical examples, as well as how to overcome common issues with performance.
Second, you need more people contributing to some of the open source options like SWI-Prolog.
Third, I think you need more ready-to-use bindings for the popular languages out there. SWI-Prolog provides a C interface, but if you had an interface to, say, Node, Go, or Rust that was simple to install, with some good examples, you could reach more people.
I don't know about general-purpose, but I've seen Prolog implementations for specific logic problems. Most recently, chalk [1], a Prolog interpreter designed to be used in the Rust compiler's trait system. So I wouldn't say it's not popular, but the areas it's used in are probably not what most programmers deal with every day.
I would say the problem is pretty similar to "why isn't Haskell more popular?"
both have a different/specific (as in non-mainstream) way of thinking, and it is not easy to switch from common programming languages to these. and since it isn't easy, most people don't go deeper into them
from a company point of view: if it's hard to find a good Prolog/haskell developer, then they will be more expensive, so they stick with the common Java/C/C#/Python/Ruby/JS stack
I tried programming in Prolog and read some introductory books. However, no book talks about how you would define more complex 'types' of objects. How would you model the following domain in Prolog:
Comment:
Attributes: text, points
User:
Attributes: name, emailAddress (as a struct of first part, domain, top-level domain)
Admin (a special user):
Additional attributes: set of rights (can delete, can hide, can modify)
Finally, there is an n:1 association between Comment and User, and I want to make some queries about this domain.
There are several ways to do it. One straight-forward way is as follows:
We can represent comments by facts like:
comment_id_user_text_points(3, 1, 'hello!', []).
I do not know what exactly you mean by "points", so I have simply supplied an empty list (fourth argument) in this case. Note that there is an ID for the comment, and an ID for the user that posted the comment. Therefore, we say that this is a relation between users and comments and their text and points, usable in all directions.
Each user can likewise be represented as follows, relating a unique ID to a name and e-mail address:
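For example (a hypothetical user fact, following the same naming convention):

user_id_name_email(1, 'Alice', email(alice, example, com)).

We can then post the conjunctive query

?- user_id_name_email(UserId, Name, _),
   comment_id_user_text_points(CommentId, UserId, Text, Points).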
and obtain, on backtracking, all comments for any user.
This is all completely analogous to how you would represent such data in any database system. It's more convenient in Prolog though for several reasons.
Wow, modelling it like in a relational DB is a great idea! Thanks. Your trick of including the names of the properties, like "_id_user_text", is neat. In the past, I tried a similar solution, but I would always forget which property is at which position.
How would you make sure that the rights of admins are only delete, hide, and modify (and not e.g. walk, talk, chalk)?
If I use
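a fact like the following (a hypothetical example in the same naming style),

admin_id_rights(1, [delete, hide]).

then nothing in the representation itself stops me from putting walk or chalk into that list.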
Moreover, how would you then 'create' objects without adding them with asserta? Assume that all the users, admins, and comments are written to a text file which should be queried. I can easily read them into some nested compounds with DCGs. However, I would like to create objects (e.g. like Java's new) to check that they adhere to the constraints of my model.
The naming trick makes clear what each position means: The order of arguments follows that of the name's components. So, if the name contains id_user_text, then the arguments are also in this order: ID of comment, ID of user, and the text. This is a simple mnemonic technique I use for naming important predicates.
As to your first question, we are now talking about (database) integrity constraints. So I would first state which rights are admissible at all by clearly defining what we consider a right. For example, in this concrete case:
right(delete).
right(hide).
right(modify).
Next, I describe the situation that an admin was inadvertently assigned a "bad" (i.e., not actually existing) right:
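% A sketch, assuming the rights are stored in (hypothetical) facts of
% the form admin_id_rights(Id, Rights), as in the question above:
user_bad_right(Id, R) :-
    admin_id_rights(Id, Rs),
    member(R, Rs),
    \+ right(R).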
Now, I can ask Prolog: Are there any bad rights assigned? I do this by simply posting the most general query about this relation:
?- user_bad_right(ID, R).
false.
I conclude: No, there are no bad rights assigned.
As for creating objects: This is a good case of using the dynamic database, i.e., predicates like assert/1 and assertz/1. The database is very good for frequently retrieving information, but not good for frequently updating the information. This is a fitting situation: Comments are presumably only posted once, and in that case, you can simply add such facts to the database.
But you can of course also make all this explicit, and first construct a term of the form comment_id_user_text_points(3, 1, 'hello!', []), and then reason about such terms (instead of reasoning about the asserted facts). Note that this term looks exactly like the fact syntactically, due to the homoiconic nature of Prolog. Therefore, you have many ways to reason about your data. You can even write all such terms to an external file, and simply consult all facts (and even rules) it contains by invoking consult/1 dynamically.
Since Prolog is now gaining more traction, I am adding to my favourites all Prolog links which I consider noteworthy or recommended reading. Please see my profile for more information, and also for future updates.
Here's a shot. As you don't declare relations in Prolog the way you would, say, write SQL CREATE statements, I'll just show the data model on concrete data.
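For instance (hypothetical predicate names, one fact per "row"):

user(1, alice, email(alice, example, com)).
user(2, bob, email(bob, example, org)).

% An admin is a user with an additional set of rights:
admin(2, [delete, hide, modify]).

% comment(Id, UserId, Text, Points) captures the n:1 association:
comment(100, 1, 'Nice write-up!', 5).
comment(101, 2, 'I disagree.', 2).

A conjunction such as comment(_, UId, Text, _), user(UId, Name, _) then joins every comment with its author.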
You would probably realize why pretty soon if you tried writing anything meaningful in Prolog.
In my experience Prolog is conceptually the coolest, but practically the worst when trying to get anything done.
Basically, writing a program in Prolog is like solving a puzzle. Nobody wants to solve an additional "puzzle" on top of their already existing problem they set out to solve by programming. (Unless they're doing it for fun)
I attended, but can't claim much comprehension of, Chris Martens' talk on logic programming at Strange Loop 2013. If you're interested in the world beyond Prolog, worth a watch.
Posits that the huge hype put into it by the Japanese Fifth Generation Computing Project, which it failed to live up to, essentially killed off interest in the language, which it never recovered from.
Posits instead that much of the low-hanging declarative fruit has been picked off by other, more specialized languages, ranging from SQL to production-rule systems to even LINQ, so Prolog no longer is the default go-to declarative programming language.
Having tried Prolog for a problem that it's relatively well-suited for, I found that all the advantages Prolog gives fall away very quickly. There's a lot of finicking around with weird organization of your code just to get things to work, despite being "declarative." For example, trying to rely on libraries to solve parts of the problem for you can be really strange because you can be 99% of the way to solving your problem, but not being able to edit/inline that code will create problems. As an example, trying to use a sorting function on an uninitialized list can't really do anything to help you, but if you inline that same code it can fix it because you're declaring properties of the list in-place.
>> trying to use a sorting function on an uninitialized list
That would give you an error, and rightly so. You can't really expect to sort what's not there. Or am I misunderstanding your comment somehow? I don't quite understand what you mean by "inlining" and how that can help sort a list of no-values?
The way Prolog is "declarative" is that it bridges the gaps between assertions about the properties of a result and actions used to achieve those properties. For example,
p(X) :- sorted(X).
is saying both (the imperative) "p(X) = sort(X)" and (the assertive) "a sorted X has property p(X)". To solve for p(X), one only needs to find an X such that X is sorted. This makes it difficult to say "I expect input that, when sorted, has this property", because there is no clear meaning when X is uninitialized. Yet if you expand the meaning, you can become "more explicit" about it and convince Prolog that what you're saying is true.
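Consider this definition (a sketch reconstructing the code this reply refers to):

% Sketch (SWI-Prolog; in GNU Prolog the FD constraints are built in,
% so the directive is unnecessary there):
:- use_module(library(clpfd)).

sorted([]).
sorted([X|Xs]) :-
    smaller_than_all(X, Xs),
    sorted(Xs).

smaller_than_all(_, []).
smaller_than_all(X, [Y|Ys]) :-
    X #=< Y,
    smaller_than_all(X, Ys).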
Here, I am using the CLP(FD) constraint (#=<)/2 that works correctly in all directions, whether or not its arguments are already instantiated to concrete integers. Such constraints are available in all widely used Prolog systems. You can try the above in GNU Prolog, for example.
Now the point:
Exactly as you say, the predicate works for concrete lists of integers that are already given:
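?- sorted([1,2,3]).
true.

?- sorted([2,1]).
false.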
And moreover, it also works if the integers are not given, for example:
?- sorted([X,Y,Z]).
Z#>=X,
Y#>=X,
Z#>=Y.
We can even ask for example:
?- sorted([X,2,3]).
X in inf..2.
This means that if this relation holds, then X is at most 2.
We can obtain concrete solutions with enumeration predicates. For example:
?- Vs = [X,Y,Z], sorted(Vs), Vs ins 1..3, label(Vs).
Vs = [1, 1, 1], X = Y, Y = Z, Z = 1 ;
Vs = [1, 1, 2], X = Y, Y = 1, Z = 2 ;
Vs = [1, 1, 3], X = Y, Y = 1, Z = 3 ;
Vs = [1, 2, 2], X = 1, Y = Z, Z = 2 .
And maybe most strikingly, we can also use this in the most general sense, where we ask: Is there any solution whatsoever? The system generates answers in this case:
?- sorted(Ls).
Ls = [] ;
Ls = [_28] ;
Ls = [_170, _176],
_176#>=_170 ;
Ls = [_1194, _1200, _1206],
_1206#>=_1194,
_1200#>=_1194,
_1206#>=_1200 .
Note that all these considerations lead us to the conclusion that "sorted" is a rather bad name for the relation, since it implies that "something has been sorted" and thus encourages a rather imperative view. A better name would be ascending/1, denoting a relation that is true iff its argument is a list of ascending integers, whether or not they are already known.
>> (the assertive) "a sorted X has property p(X)".
Sure, but X is a variable, implicitly universally quantified, and p(X) <- sorted(X) is only going to be true for some values of X. Unless you apply some stricter constraints, for example, as indicated below, you can't really know for which values the relation is true.
Are you saying that, in a purely declarative context, p(X) :- sorted(X) is always true? That depends entirely on the definition of sorted/1. For instance, the following is trivially always false:
p(X):- sorted(X).
sorted(X):- false.
And the following always true:
p(X):- sorted(X).
sorted(X):- true.
Normally, sorting predicates will do something more interesting, including declaring properties of X that would probably answer your question.
I'm still a bit unsure about what you are trying to say and what you mean with inlining properties etc so apologies if I haven't addressed your concerns.
In my opinion there are mainly three reasons why Prolog is not popular.
The first is that declarative languages in general are not as immediate as imperative languages. There is nothing intrinsic to declarative languages themselves here; it has mostly to do with the fact that we are taught and exposed to imperative languages first, and only then hear about other, more "exotic" paradigms.
Second issue: in a sense, the elegance of the language has been its biggest weakness. The academics loved to play with conceptual matters. A lot of effort went into papers on semantics and on mapping different types of reasoning, but not much into tools, IDEs, or compilers. The community didn't build enough libraries, the right abstractions, or a shared software engineering practice. Every time you start a project in Prolog, you are starting from scratch.
Also, the elegance of the language is the reason a lot of people approaching Prolog get quite demanding. "It's logic programming so why do we have to use a cut?". Yet most programmers have no problems with the quirks of C++ or Java.
The third is that Prolog hasn't found its area of excellence. C, Go, Scala, and Java all have their own strengths and scenarios where they are the best candidates. Prolog would in theory make an excellent candidate for representing complex rule-based domains, covering a module of a larger piece of software. It would be perfect for representing the rules of a board game or the knowledge of a chatbot, but for a number of reasons that's not happening. How does the reasoner scale with larger datasets? Will it be hard to manage? Is there an example of something similar being attempted?
In my very personal opinion, as a community we should learn from these lessons, take the best bits of Prolog and make something new.
At Grakn.ai (https://grakn.ai/) we are working on a graph database that uses an inference layer based on Prolog's resolution; maybe worth having a look. The idea there is that Prolog maybe shouldn't become a great general-purpose language, but that its best parts should be used as a base for the next advances in knowledge representation and reasoning.
In that thread, I show how you can express the sample relation in Prolog. Maybe you can go into more detail, either here or in that thread, on the advantages of Grakn over Prolog for such cases?
Someone from my team replied. Thanks for taking the time to start the discussion on the comparison, we have a few people with a computational logic background here and we plan to publish a blog post on the topic soon!
Perfect, I am looking forward to reading the blog post!
I hope it will also mention Prolog as an important influence.
As to your second issue above: Please note that there are many collections and even entire books that describe antipatterns of Java, C++ and many other programming languages. In my experience, programmers from these communities quickly learn to avoid these antipatterns. In the Prolog community, this happens more slowly, but it does happen too. There are very good reasons to avoid !/0 and other impure constructs in logic programs. In my view, the key issue is to find and teach better constructs that should be used instead, and the best alternative language constructs are still waiting to be discovered.
In this spirit, I fully agree with you that we should keep Prolog's best aspects, and extend them as far as we can into the directions we need.
Perhaps what is needed is a Prolog-derived language which is nicer to use. There are a number of declarative languages available, but my personal favourite would be Picat http://picat-lang.org/ A disadvantage of Picat, though, is that unlike SWI-Prolog it doesn't have any sort of package management system, nor a general means of using FFI to call into C libraries.
Picat is a very interesting development. Note though that it, at least currently, omits some key features of Prolog such as the homoiconic syntax, and definite clause grammars (DCGs).
I couldn't find much use in Prolog when I tried it. At first it was neat how you could express simple problems, then I learned how one used ! for everything else. It felt like instead of working on a solution, I had to work on what the solution isn't.
It's a very specialized system in my view, so there is no hope of it ever becoming general-purpose. But maybe that's because I don't know enough of Prolog.
if_/3 and other declarative predicates like dif/2 are more general alternatives and likely good solutions in the cases you mention. They are still quite recent, at least if we ignore the fact that dif/2 was even available in the very first Prolog system, sometimes called Prolog 0.
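For example, dif/2 states that two terms are different, and it works correctly no matter when its arguments become instantiated:

?- dif(X, a), X = b.
X = b.

?- dif(X, a), X = a.
false.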
Prolog is already a general-purpose language. So is any Turing-complete language, by definition. Whether it's a "great" such language or not is kind of a personal tastes thing.

As to why it's not more popular, I've thought about this very often and I don't have an answer. What I know for sure is it's never going to become more popular until people move on from that silly soundbite about its "general purpose"-ness, which never made any sense to begin with.

People program "general purpose" stuff in languages that are much worse for "programming in the large" than Prolog. Most of the big operating systems are written in C, large swathes of game code are in some assembly language or other, about 60% of enterprise code is in Java, and most supercomputing code is in FORTRAN, fer chrissake. Not to mention, all of the internet is in javascript, a language that was originally meant just for writing small snippets of code to manage buttons and text fields and stuff. You're not going to tell me that javascript is "general purpose"?
Prolog is already a general-purpose language. All you need to do is have a look at the library section in the Swi-Prolog documentation. Besides the usual suspects (constraint logic, tabling, lambdas and such and of course parsing all possible text-based formats ever in time dt) we find a bunch of diverse libraries:
An http package for all your client/server needs [1]
A library for opening web-pages in a browser in a system-agnostic manner [2]
A library for command-line parsing [3]
An RDF parser and a semantic web library
A package manager [5]
A random numbers generation library
A library for manipulating the Windows registry [7]
A library for solving linear programming problems [8]
A thread pool management library [9]
And a whole lot of support for a bunch of other stuff like coroutining, multithreaded applications, a profiler, terminal control, an ODBC interface, an interface to Protocol Buffers, bindings to zlib, GNU readline, and so on and so forth.
The question asked is "what would it take for Prolog to become a great general-purpose language". That sounds like they think it isn't. If that's not what was meant, OK, but that's definitely what it reads like.
I meant to say "what would it take for people to think of Prolog as a great general-purpose language?".
I have never used Prolog for anything serious, but I think it has great potential and I really want to like it. It almost looks like the perfect programming model, and I want to understand why it's not.
What you're asking is something that the logic programming community has asked itself very often, but it's very hard to answer with any certainty.
One thing that should be noted is that Prolog was very popular, for a brief period of time, in the 1980's. For instance, check out this year's TIOBE index report:
If you scroll down to the section titled "Very Long Term History" you'll see Prolog listed as the 3rd most popular language in 1987 (behind Lisp in second place and C in first, and ahead of C++ in fourth). By 1992 it had dropped to 14th place, and then it was pretty much all downhill from there.
As a personal anecdote, I've read a number of Prolog textbooks from the late '80s and early '90s that begin by saying that it is very important to learn Prolog because it is sure to become a very popular language in the future.
In other words, Prolog did have its time in the sun. But then it fell from grace.
As far as I can tell, the most likely narrative to explain this meteoric change in fortunes is the one that pins the blame on the association of Prolog and logic programming to the Japanese Fifth Generation Computer project. This (theoretical) explanation of the rise and fall in popularity of Prolog is proposed here:
In short, how this story goes is that, when Japan chose to use logic programming for its Fifth Generation Computer project, which was seen as potentially extremely disruptive by the West, companies and academics in Europe and the USA suddenly took a great interest in Prolog, thinking that the Japanese must know something they didn't. Then, when the Japanese project flopped, it took Prolog down with it.
I stress again it's just a theory, but, to me in any case, it's at least very plausible.
If you're into Clojure et al., or at least Datomic, you've probably used some subset of Prolog (qua Datalog).
Clojure also brings with it logic and relational programming which is likely the go-to choice of the Clojurist for expressing work-flow or permissions management type problems. Not exactly Prolog but it's the same difference.
I once asked a retired Japanese computer scientist why the 5th generation project had failed. He said: "What do you mean 'failed'? All my former colleagues who had worked on this project went on to become full professors!"
Also, I once attended a talk by a Japanese researcher who was intimately involved in the project. One significant phenomenon at that time was that commodity hardware was progressing much faster than had been anticipated, in the end eclipsing the specialized designs that were being worked on.
There are other reasons too, and here I can only say that Prolog as it is today had very little to do with the outcome. In fact I think this would be a great follow-up question to the present discussion!
"One significant phenomenon at that time was that commodity hardware was progressing much faster than had been anticipated, in the end eclipsing the specialized designs that were being worked on."
This is exactly what happened. A slow language like Prolog on OK hardware that accelerates it can't compete with a fast language like (not-Prolog) on highly-custom, top-of-the-line hardware that accelerates it. It can't in the general case and plenty times not in the special case. Now, that was when Moore's Law was in full swing. There's potential now for that pendulum to swing in reverse for something like this.
I've wanted to use Prolog in JavaScript and Java projects before but I could never find robust Prolog implementations for either. I wouldn't want to use Prolog for a GUI or be tied to binary implementations of it but it has its uses.
Because it's only first-order, so it's really tedious. So you can say that if a > b and b > c then a > c, but you can't say that if a op b and b op c implies a op c, then op is transitive.
This is not the case: Prolog has several higher order constructs like the call/N family of predicates, maplist/[3,4] and foldl/N.
Your particular example of transitive relations is one of the most elementary examples, typically solved as an exercise in basic Prolog courses when defining reachability: B is reachable from A if there is an arc from A to B, or an arc from A to some node from which B is reachable. Transitivity can easily be defined by a rule, as you correctly mention. For instance, let us take your example:
fact(a > b).
fact(b > c).
transitive(A, B) :- fact(F), F =.. [_,A,B].
transitive(A, C) :- fact(F), F =.. [_,A,B], transitive(B, C).
Here are all solutions:
?- transitive(X, Y).
X = a,
Y = b ;
X = b,
Y = c ;
X = a,
Y = c ;
false.
Note that the operator ">" is not mentioned in the definition of transitive/2. Instead, I am using the meta-predicate (=..)/2 to reason about all functors that can arise, making this a quite general definition of transitive relations.
We can also reason explicitly about the relation, by making the functor (which may or may not be defined as an operator) available as a predicate argument:
transitive(Op, A, B) :- fact(F), F =.. [Op,A,B].
transitive(Op, A, C) :- fact(F), F =.. [Op,A,B], transitive(Op, B, C).
Now we can ask queries like:
?- transitive(Op, X, c).
Op = (>),
X = b ;
Op = (>),
X = a ;
false.
and also in the other direction:
?- transitive(Op, a, Y).
Op = (>),
Y = b ;
Op = (>),
Y = c ;
false.
And we can also ask in the most general way possible, where all arguments are fresh variables.
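For example:

?- transitive(Op, X, Y).
Op = (>),
X = a,
Y = b ;
Op = (>),
X = b,
Y = c ;
Op = (>),
X = a,
Y = c ;
false.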
I've never messed around with it beyond toy problems, but I recall Norvig saying in one of his books that it was trivial to code up NP-hard problems in Prolog and other constraint logic languages without realizing that you had done so.
SAT is about a very special case of logic, namely propositional logic, where every variable stands for exactly one of only two possible truth values. Finding satisfying assignments in propositional logic is indeed already a computationally hard task, and it is good that we have efficient SAT solvers for such use cases, allowing us to solve a large variety of combinatorial problems nowadays.
However, Prolog lets you tackle not only propositional logic, but tasks that go far beyond this, belonging to a logic called classical first order logic, of which propositional logic is only a subset. In first order logic, we reason about predicates between terms, and this lets you tackle much more complex tasks, far beyond what SAT/SMT solvers can solve for you.
In fact, first order logic is so powerful that it lets you describe everything you can in principle perform with a computer. It lets you describe how a compiler works, for example. Or how numerical integration works. And all other computations you have ever seen computers perform.
In short, Prolog is a programming language, and can in fact even be used to implement a SAT solver, which is impossible to do with just a SAT solver. Many Prolog implementations even ship with a SAT solver as one of their libraries.
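For example, SWI-Prolog ships with CLP(B), a constraint solver over Boolean variables that is, in effect, a built-in SAT solver (a small illustration):

?- use_module(library(clpb)).
true.

?- sat(X * ~Y).
X = 1,
Y = 0.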
There are also even higher-order constructs in Prolog, such as predicates that let you invoke other predicates, making programming in Prolog very expressive and convenient.
> first order logic is so powerful that it lets you describe everything you can in principle perform with a computer
Not quite sure what you're referring to here, but induction schemata seem to be a notable exception. There's a reason higher-order logic is often preferred for software verification.
I guess FOL is adequate if you're allowed to have an infinite number of axioms, but that doesn't seem very satisfying (pun intended).
I am referring here to the fact that first-order logic is sufficiently expressive to describe how a Turing machine works, making FOL Turing-complete and thus, as a special case, making every computation you can carry out on a computer expressible in FOL, as a first-order formula that is satisfiable iff the TM accepts the input.
SMT solvers typically only support decidable theories, and since first-order logic is not decidable (only semi-decidable), such solvers are not as expressive as Prolog.
In Prolog, the more natural approach that closely corresponds to SMT is simply implementing the theory as a constraint solver. For example, check out CLP(FD) and CLP(Q) for constraint solvers over integers and rational numbers, respectively. They let you formulate statements over these theories and search for solutions. Note though that solving equations over the integers is not decidable either (only semi-decidable), and so you may search indefinitely if there is no solution.
Importantly, constraints over these theories blend in completely seamlessly into Prolog, since they are simply available as predicates. For example, we can write:
?- A^N + B^N #= C^N,
N #> 2,
[A,B,C] ins 1..sup.
This expresses Fermat's Last Theorem in terms of CLP(FD). A constraint solver with perfect propagation (which, as we know, cannot exist for the integers) would deduce that this conjunction of constraints cannot hold.
Answer-set programming (ASP) is another direction to go in w.r.t. this relationship. It takes a Prolog-like semantics (and syntax), but rebases the solving process on top of a solver-style backend that shares some general similarities with SMT/SAT-style propositional solvers. The semantics of ASP were initially arrived at as one of several attempts to give a conventional logical semantics to Prolog. Prolog's semantics from the perspective of traditional logic are a bit obscure, because they are defined in a somewhat imperative manner as "whatever SLDNF gives you", which includes things like statement order being significant (queries might terminate under one ordering and not under another, which is not something you find in logical semantics).
ASP is based on one of those Prolog-semantic proposals, the "stable-model semantics", which competed with other proposals like the "well-founded semantics". Although these are first-order in principle, existing practical tools only implement propositional solvers. ASP systems still take a Prolog-like input language that looks first-order, but they work by first "grounding" the first-order formulae to a propositional representation, and then solving them. If you make suitable assumptions about finite domains etc. this has the same expressivity, but sometimes causes blow-up (other times it causes surprisingly fast-running programs, though).