The concept works when I can declare what I want to be done, and the system does it - and when that happens, Prolog is great, the language is great for declaring what I want to be done.
However, often it happens that the system does it in a way that's somehow horribly inefficient and makes it totally unusable. And then I have to redeclare my requirements in a slightly different way to nudge the system into doing it differently - and this is much harder: now I have to worry about many more moving parts than just my code.
Also, the language is not really well suited for that; if I have to specify exactly how and in which order the calculations need to be made, then imperative languages are a much better tool. I'm throwing away all the advantages of Prolog if I have to do this all the time - and in practice I do.
Haskell has somewhat similar problems (though generally not with unexpected time complexity but unexpected memory complexity through laziness and thunks), but Prolog is much worse in that regard.
I'd say... sufficiently smart. We've been writing SQL for decades without such complaints and it works rather well in practice. Sometimes you have to introspect your queries and interact with the underlying interpreter to understand the performance constraints but it is much better than the alternative! Imagine having to write every query as a procedure... what a headache and the duplication!
Although learning to write well-performing queries does, I think, take some understanding of the underlying theory driving relational databases and SQL... which can be true for Prolog as well.
I'm not really sure why Prolog isn't more popular. Any time I've written declarative DSLs for a solver engine it always feels like I'm re-inventing an under-specified sub-set of Prolog.
After all the calculation of computer programs is the syntactic manipulation of predicates, is it not? Maybe if we all started with first-order logic and predicate calculus a language like Prolog would appear to be more practical. I think for a majority of programmers however it can seem a bit alien as we're trained to think in terms of procedures and steps rather than invariants and predicates.
Speak for yourself. My experience of SQL is that it requires careful tuning whenever the numbers involved grow beyond the trivial. Query plans need inspecting, and often indexes aren't used appropriately. Queries are written and rewritten several different ways to encourage the database to choose one plan or another.
If your use of a database is limited to key-value lookups and object graph navigation with an ORM, with <100 values on any given edge, you'll have a fine time, I'm sure. Not everybody uses databases that could trivially be replaced with an object graph.
For more complex operations, I'd like to be able to write in the form of a plan with a graph of data flow directly. Scan this table, filter against this index, sort these two data sources and do a merge anti-join, etc. Today, I need to rewrite the query while knowing the planner well enough to foresee how it's going to implement my query, gradually homing in on my target through a series of indirect inspections.
I use PostgreSQL quite heavily at present and do not use ORMs. I'd guess, probably optimistically, that 80% of the non-trivial queries I write are easily optimized by the query planner and require no further thought from me - and in my experience non-trivial queries aren't the majority of the queries I write anyway. The final 20% of those queries are the hard ones that, as you say, require some work and rewriting to get the query planner to choose an appropriate plan.
At least I find the tooling in PostgreSQL to be more than ample to assist in optimization tasks.
The spirit of my example of SQL is that we already use declarative programming languages in mainstream systems. Even with recursive queries and common table expressions I think, and I may be wrong, that Prolog is probably more general-purpose and expressive than SQL. And yet it's not nearly as popular for some reason.
I mean, you technically can write all of application functionality in many dialects of SQL, but it's generally considered a bad idea to do so for obvious reasons - SQL is good for some tasks, but for everything else other languages are preferred.
In a similar manner, even in tasks ideally suited to Prolog's strengths (e.g. reasoning systems), the Prolog-shaped part of the task will be relatively small - in any practical system, the "boring" code of integrating everything with everything else, handling a graphical UI, mangling data from one format to another, wrapping it in a REST web API and god knows what else will be at least 3-10 times larger than the core part of that task. I recall an old project where we had the core algorithms in Prolog, but the surrounding app in Java - the fact that the Java code was much more than 20 times bigger wasn't a verbosity issue, it actually did 10 times more stuff; much of it was boilerplate, but it was all needed, and it was clearly simpler/faster/cheaper to build it in Java back then rather than do everything in Prolog. It's not sufficient for a language to make the hard (or domain specific) things possible, it also needs to make all the easy things easy, and Prolog really does not.
So the language either needs to be truly general purpose (and "more general-purpose than SQL" isn't sufficient) or easily integrated with other languages. There are successful examples - for example, tensorflow->python, where again the actual ML part for anything more than proof-of-concept is much, much smaller than the surrounding general purpose python code; or SQL, which is well integrated into other languages.
In another post you mention "I've written declarative DSLs for a solver engine" - why wasn't it trivial (compared to writing a new DSL) to embed some Prolog code as the DSL inside the app, as easily as people embed SQL in their apps as the DSL for managing data? There's your answer for why it's not nearly as popular.
Depending on what you mean by "trivial", that can be syllogistically true, but if so, then a LOT of useful work can and is done in the realm of the 'trivial numbers involved'.
People have to do exactly what he was talking about - restating what they want done in a different way to nudge the DB onto a performant path - all the time in SQL.
But SQL is a specific domain where it is (probably) worth it.
SQL, at least prior to recursive CTEs, isn't Turing complete, and mitigates the problem, but the need to understand the evaluation strategy of a particular interpreter (which is more involved than understanding the unification algorithm underlying prolog) has been a common complaint about SQL for a long time.
But SQL makes it worthwhile by still being the most convenient existing method to talk to many databases that have desirable properties. Prolog doesn't currently have a recognized niche where it's the best language that is as fundamental as the one SQL has.
There is a language called Mercury that is an extension of a subset of Prolog, i.e., it takes some useful Prolog stuff away but adds some other useful stuff of its own. One of its additions is a mode system; essentially, a simple way to declare how you intend to use your predicates. You can then write your program once, putting the goals in clauses in any order, and the Mercury compiler will use the mode system to figure out all possible data flows and generate specialized, optimized code for each of them.
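To illustrate the idea in plain Prolog (a hypothetical example, not Mercury syntax): the goal order that terminates often depends on which arguments are instantiated, and that choice is exactly what a mode system can automate.

    % prefix_length(List, Prefix, N): Prefix is a prefix of List of length N.

    % Good order when N is known: build a list skeleton of length N first.
    prefix_length_a(List, Prefix, N) :-
        length(Prefix, N),
        append(Prefix, _, List).

    % Good order when List is known but N is not: enumerate only actual
    % prefixes of List, then measure them.
    prefix_length_b(List, Prefix, N) :-
        append(Prefix, _, List),
        length(Prefix, N).

With List known and N unknown, prefix_length_a/3 gives the right answers but does not terminate on backtracking (length/2 keeps proposing longer skeletons), while prefix_length_b/3 terminates; with N known and List unknown, it is the other way around. Mercury generates the appropriate ordering for each declared mode from a single definition.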
I like a lot of Mercury's ideas, but as a Prolog fan I find it does throw some things out that I rather like. If you're less attached to Prolog, you might like Mercury more :-)
However, at least in my experience, "nudging" the system in the right direction is much easier with Prolog than with imperative languages:
With Prolog, all it takes is often simply reordering a few goals, which some systems even perform automatically for you (see YAP, and some parts of SWI-Prolog for examples, in particular the RDF framework). There is also typically much less code to rewrite in the first place. Critically, the ability to reorder your goals depends on your working in the pure subset of the language. If you leave this subset, then automatic and manual optimizations alike become orders of magnitude harder to apply.
Regarding Haskell and Prolog, my experience also differs significantly: From my experience and observation, the consequences of lazy evaluation are much harder to understand than Prolog's implicit search mechanism and backtracking.
I'm going to kindly ask you to give at least a couple of different examples of this, because it's another of those criticisms of Prolog that have no real basis in actual practice of programming in the language.
My source for that is that I've been programming all my side projects and two dissertations in Prolog since 2007 and I've never been in a situation where I had to think long and hard about how to order my predicates. Most of the time the correct ordering is pretty straight-forward.
This criticism, about ordering of clauses, has its historical root in the competition between Prolog and PLANNER - because in PLANNER predicate order did not matter, which made the language more "pure" but also less efficient (which in turn is why many fewer people have heard of PLANNER than have heard of Prolog). The real point about it is not that you may have to pull your hair out a couple of times while programming. The point is that in first-order logic, as in maths, predicate order doesn't matter and therefore, in a pure first-order logic language, it shouldn't matter either. The criticism is that Prolog lacks purity, in other words.
Which, if I may be so bold, is nonsense on stilts. The real world is not pure and you can't have purely-anything languages. People who complain about Prolog's lack of purity are quite prepared to throw away the baby of a very useable, 99% declarative language, allegedly because the bathwater of the missing 1% is really messing their code up. I'm just not convinced by that, to be honest.
    % first version: recursive goal first
    ancestor_of(A, P) :-
        ancestor_of(A, Parent),
        parent_of(Parent, P).

    % second version: parent_of/2 goal first
    ancestor_of(A, P) :-
        parent_of(Parent, P),
        ancestor_of(A, Parent).
I don't think it has much to do with experience, or at least you don't need to be an advanced user to be aware of how clause ordering affects your results. So, yes, you need to understand a few things about how the language works before you can program anything non-trivial, but that's true for any language, isn't it?
Critics, mostly in the past, latched on to the few niggling non-declarative impurities like this to write off the entire language as "not truly declarative" when there isn't really anything usable that's closer to the ideal. Talk about binary logic...
Edit: Glad to see another user around here btw. Check out my new library. Just finished it and am looking for eyes on :)
The OP complained about "[having] to specify how exactly in which order the calculations need to be made", and you said that this complaint had "no real basis in actual practice" and "I've never been in a situation where I had to think long and hard about how to order my predicates". You did kind of dismiss their criticism.
BTW, you keep talking about clause ordering, where goal ordering within clauses is the more difficult issue, I think.
> I don't think it has much to do with experience, or at least you don't need to be an advanced user to be aware of how clause ordering affects your results.
Based on my experience as a teaching assistant in a Prolog university course for a few years, I would say that it does have to do with experience, and beginners often get clause and goal order wrong. It's true that soon they become aware that ordering affects the behavior of the program, but they often don't understand how it affects the program, so they semi-randomly reorder things until they find a permutation that seems to work.
> So, yes, you need to understand a few things about how the language works
Yes! But we collectively haven't figured out how to teach this well, and many people are left with a terrible first impression of Prolog and don't continue to a point where they understand enough to be effective.
> Check out my new library.
From the Readme it looks nice. I think iterm_value/3 should maybe be called iterm_nth/3, which is more idiomatic. Also, since you mention GNU Prolog's array library, maybe also add if your interface is compatible with it, and if not, why not? I think more standardization of libraries across implementations would be better for the Prolog community than more fragmentation.
>> BTW, you keep talking about clause ordering, where goal ordering within clauses is the more difficult issue, I think.
Yeah, sorry about that. I often refer to goals in the body of a predicate as clauses. I'm not sure if that's entirely wrong but it might be a bit confusing.
>> Based on my experience as a teaching assistant in a Prolog university course for a few years, I would say that it does have to do with experience, and beginners often get clause and goal order wrong. It's true that soon they become aware that ordering affects the behavior of the program, but they often don't understand how it affects the program, so they semi-randomly reorder things until they find a permutation that seems to work.
Isn't that how everything gets done? :)
I don't have any experience with teaching the language but I do understand it's a very hard subject to teach. I do remember that my first serious attempt at coding in Prolog was unbelievably frustrating. It took me a week to write a measly little predicate to get the next element of a list- because I really didn't understand what I was doing. I basically didn't need to do anything in the first place, I could have done what I wanted with member/2. But this was really not obvious to me from the descriptions of member/2 (or anything else) so I spent a week tracing my program and trying to figure out what the hell it was doing.
I'm used to the pain though, because I'm dumb-as-bricks and everything I've ever tried to learn, I had to really struggle through. So I persevered and now I'm a happy long-time user (that doesn't mean I don't still hurt, often). I understand why smarter students with a lower pain threshold would just give up on Prolog.
I was unhappy with the way Prolog was taught in my degree course. It was mostly "here's the syntax, here's some examples, go figure out the semantics on your own". Which is completely inappropriate for a language that's 99% semantics and basically has almost no syntax.
On the other hand, I think, as students, we had all been collectively spoiled by Java and Python and so on. If most languages are easy to pick up but hard to master, a language that's hard to pick up _and_ master is not going to be very popular.
>> From the Readme it looks nice. I think iterm_value/3 should maybe be called iterm_nth/3, which is more idiomatic. Also, since you mention GNU Prolog's array library, maybe also add if your interface is compatible with it, and if not, why not? I think more standardization of libraries across implementations would be better for the Prolog community than more fragmentation.
Thank you! I appreciate this. Those are good suggestions, particularly the one about following GNU Prolog's interface. You're absolutely correct about fragmentation and I'll try to follow your advice- but indexed_terms is not my array library yet! It's a precursor to that. I'm working on the actual array library now, based on indexed_terms. I just put indexed_terms out there hoping for some early feedback.
> I'm concerned that miserly nitpicking like this only serves to give programmers a good excuse to not even try to pick up Prolog.
Agreed, though "the issue you raised is not really an issue" could be read as "you're not smart enough for this", which is also a turn-off.
Anyway, even if goal ordering matters, we still have unification and backtracking as advantages over imperative languages.
Eh, I hope my original comment didn't come across as saying this. I'm fully aware that's the worst attitude for attracting more people to Prolog.
Luckily, there is a powerful way to detect such problems in Prolog, based on program slicing. The trick is to narrow down the program to those fragments that exhibit the same problem.
For example, let us start with the first program and one fact for parent_of/2:
    ancestor_of(P, P).
    ancestor_of(A, P) :-
        ancestor_of(A, Parent),
        parent_of(Parent, P).

    parent_of(a, b).

The most general query yields answers:

    ?- ancestor_of(X, Y).
    X = Y ;
    X = a,
    Y = b .

However, the query ?- ancestor_of(X, Y), false. does not terminate. To see why, we generalize the program by removing goals and clauses. Removing the first clause leaves a fragment that still does not terminate:

    % ancestor_of(P, P).
    ancestor_of(A, P) :-
        ancestor_of(A, Parent),
        parent_of(Parent, P).

Removing also the parent_of/2 goal narrows it down further:

    % parent_of(Parent, P).
    ancestor_of(A, P) :-
        ancestor_of(A, Parent).

This remaining clause alone already does not terminate, so no parent_of/2 facts we add can make the original query terminate.
The possible application of such reasoning is a rather unique property of Prolog. In fact, I know of no other programming language that even comes close to admitting such a general and easily applicable mechanism for reasoning about termination properties and other aspects!
More holds: Such reasoning can be automated! It is comparatively easy to write a Prolog program that systematically eliminates goals and clauses for you, and reasons about the resulting fragments. Some kinds of nontermination can even be automatically detected (the general problem is of course undecidable).
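A tiny sketch of the core move (using a hypothetical representation of a clause body as a list of goals): each slice omits one goal, giving a generalization whose termination can then be checked on its own.

    :- use_module(library(lists)).

    % body_generalization(+Goals0, -Goals): Goals is Goals0 with one goal removed.
    body_generalization(Goals0, Goals) :-
        select(_Omitted, Goals0, Goals).

    % ?- body_generalization([ancestor_of(A,Parent), parent_of(Parent,P)], G).
    % G = [parent_of(Parent, P)] ;
    % G = [ancestor_of(A, Parent)].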
A few practical guidelines for writing efficient and especially terminating Prolog programs can also be derived from such considerations.
The same argument can be applied to GPGPU, data flow languages, IBM's Cell BE and, god forbid, GreenArray's GA144.
Or computation architecture - you already mentioned Haskell and I added data flow computation paradigm to that list.
They're all about the same from a sufficiently distant point of view. Prolog isn't all that much worse.
In my experience, this is not at all the case. Prolog was "slow" in the '70s, when nothing was as fast as C. Nowadays, that's just not the case anymore. Try a modern Prolog compiler like YAP, with tabling and everything.
What kind of high-performance code were you trying to write, that didn't go as fast as you like? If it's something interesting I'm all for helping out.
Either of you have examples of such things?
EDIT: And what's the story on concurrency in terms of safety and scaling?
And it's currently hard for people to get high-performance SIMD and/or data flow code out of imperative code.
Why would you expect this not to happen for complex programs? Declarative languages are never going to have a compiler so smart you can completely forget about optimising. They let you get certain tasks done easily when performance isn't essential and when optimal performance is essential you need to know what's going on underneath.
Prolog execution is essentially navigating a massive search tree. There's always going to be plenty of ways to prune the tree and optimise traversal.
That lets me work quickly while my needs aren't great and while "the compiler is sufficiently smart", but when I hit a wall it gives me an obvious path forward that doesn't involve obscuring what I'm trying to do.
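For instance (a made-up toy search, not from the thread): the same relation written first as naive generate-and-test over the whole tree, then with the search pruned by deriving the third value instead of guessing it.

    % Pythagorean triples up to 100, naive: three nested choice points.
    triple_naive(A, B, C) :-
        between(1, 100, A),
        between(1, 100, B),
        between(1, 100, C),
        C*C =:= A*A + B*B.

    % Pruned: break the A/B symmetry and compute C instead of enumerating it.
    triple_pruned(A, B, C) :-
        between(1, 100, A),
        between(A, 100, B),
        CC is A*A + B*B,
        C is round(sqrt(CC)),
        C =< 100,
        C*C =:= CC.

The second version explores a tiny fraction of the tree (and skips the mirrored A/B solutions), while what it declares is still plainly visible.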
That isn't to say that Functional languages (Lisp, Haskell, Scala, etc) aren't as good; frankly, I like them better. There's just a mental gap that has to be crossed and for most developers I've met, that can be challenging. Why do things in a challenging way when I've got Java right here and it works just fine? (straw man, not my own view)
Prolog (logic programming) is a bigger gap, imho. It takes more effort for me to really understand Prolog code. Can do some beautiful things with it, but it's easier to have a few good developers be good at it and put their hard work behind a library/API than it is to have every other developer try to get over that gap.
Seems like the other difficulty is that its 'interface' (the programmer-facing portions of the language) derives from mathematical ideas which will already be very familiar to those working in the field, so it's very convenient for them, while for those outside mathematics (and the more theoretical parts of CS) there will be surprising gaps when attempting to learn it, because of the implicit mathematical concepts in the interface.
For instance, Prolog is based on Horn clauses, a subset of first order predicate logic. Additionally, 'relations' are a central part of the language's interface—and if you have a background in pure math, this is great for you because it immediately tells you all kinds of things; if you don't, it's going to be confusing because much of the literature will assume you have similar experience reasoning about relations.
Seems like it would be possible to move those concepts out of the interface, while still using them in the internals...
In college I got deep enough into Prolog to write programs where cuts were required. Later, I got excited about miniKanren. Now I've been looking into constraint programming, and what I didn't understand until recently is that it basically generalizes the Prolog techniques to handle any kind of equation solving / relational search approach. You can do amazing stuff with these systems (e.g. look at HAL http://users.monash.edu/~mbanda/hal/), including write custom search algorithms (consider classic Prolog unification just 1 search strategy on a limited domain).
But I don't think there is any getting around the fact that this stuff gets conceptually harder, as it gets more powerful. The idea of solution sets as potentially infinite relations, rather than functionally determined things, is very powerful but there is an abstraction price.
And to the extent people can wrap their heads around it, there is a "letting go" in not writing programs in the style of deterministic algebraic manipulations. Part of this may reflect a bias, but the magic of delegating solution-finding to an algorithm is also dangerous. Are you comfortable not knowing how many answers there might be if you let the program keep running?
Similarly, consider Prolog's negation as failure. You can't express many formulas that you might like to involving "NOT". Negation is interpreted as a program just not returning anything. There are important reasons for this model, but again, it isn't necessarily as easy a model where NOT can be used freely.
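A small illustration of that limitation, using member/2 from library(lists):

    ?- \+ member(x, [a,b,c]).        % "x is not in the list": works as expected
    true.

    ?- \+ member(X, [a,b,c]).        % "is there an X not in the list?": no!
    false.

    % The second query fails because member(X, [a,b,c]) can succeed (X = a),
    % so its negation fails - negation as failure is not logical negation
    % for non-ground goals.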
BTW not many people seem to know that Japan had a huge, Manhattan-style program in the 80's. They made Prolog their lower tier.. like their assembly language, or close to it (this was back when people were still thinking different computing platforms needed different hardware). Some people ultimately blamed Prolog for what is generally regarded as the failure of this "fifth generation" project to leapfrog Western tech. But I think Prolog suffered unduly as a result. In fact that project was trying to do a bunch of ambitious things and they all hung on one another. For example, they were trying to make speech the UI, with 80's tech...
All this said I would encourage anyone to explore Prolog, miniKanren, Mercury, Shen or anything of that ilk.
I only want to briefly comment on the question "Are you comfortable not knowing how many answers there might be if you let the program keep running?", which I think is well worth thinking about. If you really think about this, then the most interesting programs you can write typically have this property, because they search for things that we do not even know exist, such as new theorems, some structures with unique properties, or even mistakes and race conditions in programs!
Also, in my experience, the more you focus on declarative properties and fundamental principles such as termination, the easier it gets to apply Prolog in practice. The difficulties I have seen many beginners struggle with often arise from trying to reason about Prolog programs in the way they reason about imperative programs, which indeed gets too hard to do in practice very quickly. In contrast, if you think in terms of generalizations and specializations, and program fragments, you can reason much more easily about Prolog programs in practice. However, it requires that you stay in the pure monotonic core of the language, of which constraints are also a subset.
The promise of Prolog was that you'd be able to just define the task requirements, then write the requirements in Prolog and magic would happen. But even the most experienced Prolog coders I know don't really think in Prolog. When faced with a coding task, they inevitably first (mentally) figure out the problem in an imperative form, perhaps with some recursion, and then they ask themselves "okay, how do I convert that to logic and pattern-matching?"
If you're going to go through that process, then Prolog provides no value: it's just an extra step, and you might as well code in C (okay, Lisp).
I fear that Haskell will turn out the same way. After all monads are just monoids in the category of endofunctors, right?
Happens with lisp as well - I barely ever write prolog or lisp but I regularly think in them first before writing the solution in another language.
To make your Prolog code as general as you can, I recommend to also think in terms of relations. This definitely takes effort, which pays off in the increased generality.
My issue with Prolog is not being able to easily conceptualize 'simpler things' -- like parsing text-based data structures, processing arguments, reacting to errors at various levels of interactions with OS or Databases or or other (micro) services.
I believe that, over time, as we learn how to define programs in ways that allow us to confirm program correctness, models like Prolog's will become more and more prevalent.
Because, in my subjective view, proving correctness of declarative expressions is much simpler and more effective than proving correctness of 'implementation directives'.
That said, I have seen a lot of hostility towards rule-based systems in the last 2-3 years ... it is nonsensical IMHO.
It's more basic than that. The underlying CPU is unabashedly sequential and imperative. It even uses (shock, horror) bare GOTOs, i.e. conditional and unconditional jump instructions.
These other models might be more mathematically elegant in some abstract sense, sure, but I'd rather work with the underlying hardware than fight against it.
At the same time, LLVM is too high-level for some other things, such as Prolog's particular brands of stack unwinding and indirect jumps on backtracking.
What may look like a "functional gap" may just be the lack of ability for abstract thought, and more correlated with (in)ability to write well structured and generally "nice" code, especially to identify common patterns and create reusable code. Because functional/logical is just an implementation of some abstract pattern that is believed to be useful, and the person may have trouble understanding this pattern just as they would have problems understanding existing abstractions or creating their own abstractions.
The converse is true, of course: A person that cannot write well structured and "nice" code in other languages won't be able to magically do that in Prolog either.
For example you can implement a subset of Prolog inside Common Lisp, and thus you can employ the following paradigms as you wish:
- Object Oriented
- Logic Programming
For me, in year 2017, a programming language that implements only one paradigm, for example Haskell (almost a purely functional language), can't be considered a "general-purpose" language. Quite the opposite, it is a very specialized language, and it will work great for the problem domains where such paradigm fits perfectly into.
> Why do things in a challenging way when I've got Java right here and it works just fine?
How is that a straw man? If an individual were to say that, in what way are they refuting an argument you didn't advance?
When the author of the comment says "straw man, not my own view" in parentheses, he is saying "hey, this is what I imagine somebody who likes Java and not functional languages might think".
After that, tooling evolved, and they became easier to write major projects in. Why write a web app in Erlang when the JVM has every major templating system, an implementation of CommonMark, several high-performance JSON libraries, model validation, several mature build systems, and thousands of Stack Overflow answers?
This makes it hard for languages like Crystal or Nim to take off, but ON TOP OF THAT Prolog is asking its devs to completely change how they approach programming.
What would it take to make Prolog take off? A killer app. Which, in the 80's, looked like it was AI :-p
In comparison, writing web applications with SWI-Prolog is quite safe also thanks to the powerful quasi-quotation mechanism.
Barely. All kinds of people learned it quickly. They built web applications. It became a dominant web application language. The momentum led to many improvements in its ecosystem and fixes for its deficiencies. Prolog failed to do... any of that at PHP's scale.
This is just one of many instances where barely working solutions are used, partly stemming from a lack of alternatives at that time. Robust web frameworks are only now becoming gradually available in Prolog!
Now, someone wanting secure apps might not want PHP. They traditionally went with "safe" languages such as C# or Java, whose runtimes were full of 0-days. They might find Ada, Component Pascal, or Rust with a web framework helpful. Even the enterprise sector hasn't been writing most web pages in Ada, though. ;)
PHP worked for many people in what I can only consider a quite generous definition of "worked", in tandem with countless security problems that arise almost necessarily from its low-level data representation. I am not arguing that this has impeded its adoption, that most users care about this, or that it does not "thrive". These are not signs that it works in the way I prefer software and languages to work. In particular, for writing secure web applications, I cannot recommend PHP.
In the future, I expect to see more and more Prolog web applications. The necessary frameworks are now becoming available, with much better safety properties.
Next, you talk about the preference for secure web apps. Being a high-assurance, security engineer, I'd expect you to immediately start talking about Ada, Opa w/ modified back end, SWIFT on SIF, Ur/Web, Haskell with a secure framework... languages that are systematically designed to either make classes of vulnerabilities impossible or reduce them greatly. Also, esp Ada or SPARK whose low-level libraries are written in the same language for as much safety as possible. Instead, you counter the insecurity of PHP use by recommending a Prolog not designed for security with low-level components probably written in C or C++ since most Prologs are. It's probably gotten almost no pentesting to knock out low-hanging fruit either. If those are true, then it's highly hypocritical to smear PHP while recommending something so insecure or uncertain in its security.
Far as downvotes, it was because you made a bogus claim about PHP's success with no substantiation. I certainly didn't downvote you cuz I'm anti-censorship, voted to eliminate downvotes on Lobsters, and it's impossible to downvote a person replying to you. Check that claim by looking for a downvote option next to my name. I instead prefer to counter bullshit with facts in actual comments. Also, with citations when I have time. As is clear by this one.
Please also note that I cited quasiquotations as one important advantage of Prolog. As for citations, the most relevant publication for this feature is:
Wielemaker, J., and Hendricks, M., Why It’s Nice to be Quoted: Quasiquoting for Prolog
This feature lets you easily build safe template engines in Prolog. Please let me know what you think, if you have time. I am in fact frequently looking for a pentester when building Prolog-based websites, are you interested in such a project?
But, all that said, I think you can write an unsafe library or framework or app in any language. In fact I recall a vulnerability in a widely-used Ruby library in which one of the API functions normally expected some kind of object, but if you passed it a string instead, it would call 'eval' on it for you and use the result. This "helpful" behavior was not documented.
My point is that there's more to educating the masses about writing secure code than just telling them not to use PHP.
If you are determined enough, you can definitely write an unsafe library or app also in Prolog, and security mistakes are also routinely found in Prolog implementations, just as in PHP or Java implementations. The main point is still though that a large class of security issues that easily arise in PHP user programs by one of the ways you mention is far, far less likely to occur in Prolog programs, due to the more direct, symbolic way you reason about data in Prolog, and additional mechanisms such as the mentioned quasiquotations which allow safe embeddings.
This is true. While we're at it, I think it's worth bringing up that the most powerful use of Prolog is embedding it in a LISP that is "batteries included." One like Racket. That way, one can use a safe, easy-to-analyze, functional style for most of the application, one or more DSL's for the templating (esp HTML), and Prolog operating on LISP structures when Prolog is best thing to handle it with. Alternatively, Shen uses something like Prolog as its type system so you can hand-roll a custom, type system for each component which might include security properties.
Far as good design, web, and security-oriented, the best I've seen is Opa language for doing that plus being productive.
Too bad they moved the backend to Node. Probably to latch onto an ecosystem getting momentum, which is a lesson from the Worse is Better philosophy. Most IT tech that didn't do that disappeared into history at some point. I'd have preferred it be Go if one of the new, popular things, given it's fast, simpler, and safe enough. Hell, if libraries aren't a concern, they can even output a safe subset of C with all the checks enabled like Pieter Hintjens did with iMatix DSL's.
Hell, it might be dead. I'll have to email them some time this week to find out what's up. Fortunately, it's open-source so others can pick up where they left off if they want. Or do a clean-slate work with similar capabilities.
Yeah, it seems dead for now.
That's exactly what they do. It's why it succeeded at original goal, got popular, and is currently a mess for folks who've seen better-designed things. Further, I'll add that it doesn't appear to have been designed at all so much as a hacked together pile of features that were useful to the author at the time then extended over time. Just like C was when I researched it.
It's even worse than the two of you think. It gets worse during the times they tried to "design" some aspect of it. The hashing of names was worth bookmarking:
re static analysis
I had one project on that saved back in the PHP 5 days. What's the current state-of-the-art in static analysis tools for PHP in both commercial and FOSS? And does that sub-field have any that can prove the absence of common, severe errors like Astree Analyzer does for C and SPARK for Ada? The PHP equivalent of severe errors anyway. Especially anything allowing code injection.
"widely-used Ruby library in which one of the API functions normally expected some kind of object, but if you passed it a string instead, it would call 'eval' on it for you and use the result. This "helpful" behavior was not documented."
When eval will happen or whether risky constructs like that are used at all is one of the things that would be on my list of requirements for static analysis tools. I should be able to spot those kinds of issues in one pass before using a library. In theory anyway.
"My point is that there's more to educating the masses about writing secure code than just telling them not to use PHP."
It takes some books and practice. Also, you can get pretty far telling them to use Airship CMS since it was designed for security. I don't know anything else about it, though, since I don't do web apps or PHP.
I don't know, sorry. I pay very little attention to PHP.
> And does that sub-field have any that can prove the absence of common, severe errors like Astree Analyzer does for C and SPARK for Ada?
Not to my knowledge.
Although not interested in pentests, I do collect info on Prolog and logic programming in general as I think they can be a nice cheat on doing specs and code with no mismatch. Specs are the code (sort of). Additionally, verified systems use theorem provers, with some using first-order logic. One can do the tool or reference implementation in those. A high-performance Prolog can iterate it quickly or be used before slower, verified provers to knock out bugs faster. It's also interesting for maybe making apps more maintainable.
So, I appreciate the link and offer. I'll definitely read it tonight. I'm sure the capability you describe can knock out web errors despite risks in underlying TCB. In return, I offer you two that are high-performance and high-assurance respectively with a bonus that's practical.
You should seriously check out Mercury if you haven't. It's Prolog plus strengths from functional programming. It's also used by Mission Critical IT for business software. If you don't need Prolog's libraries, then it should (I'm speculating) be superior based on features and performance alone. Just going with second-hand data here cuz I'm not a logic programmer. :)
The evidence is there. https://www.cvedetails.com/top-50-vendors.php
Besides, depth-first recursive searches are easy to write and almost never work well in practice, so even the problems that are well represented in Prolog either do not get efficient binaries from the existing Prolog compilers or are easy enough to write in another language that little is lost in the transition (often both).
That said, I do think search based programming is underrated. There ought to be some representation for theorem resolvers that is good for general purpose programming. It's just that nobody found it yet.
There are some nice examples of search-based programming in Z3 (using the Python API) in https://yurichev.com/writings/SAT_SMT_draft-EN.pdf , if that is along the lines of what you meant.
It's about the ONLY PortableApp that offers any kind of program development capability beyond text editing, that I could tell. No compilers, no interpreters outside of this and a couple of SQLite packages. Anyway, I pulled this one down, fired it up, and... no worky. I got a console, theoretically I could execute commands, but try and access the help or docs, and it bails out with an error, telling me xpce can't be loaded, because load_foreign_library/1 is not defined? At least half the menu commands failed with the same error, closing out the app in the process. Basically, the app is impossible to use.
So, there's my answer, one that can be applied to many otherwise promising languages. Any system looking to gain traction really needs to go out of its way to Just Work; to make itself readily available, easily installable, immediately functional, and with clear documentation right at hand. You can carry on 'til you're blue in the face about lazy programmers unwilling to learn a simple build-and-install process, but with the ready availability of other environments that generally Just Work, there's really no excuse. At least, that's how I feel about it.
Is it not possible that this "PortableApps" package broke the program?
Please note that SWI-Prolog is free software and depends on such contributions or at least reports to work reliably on all platforms. Alternatively, there are also several commercial Prolog implementations with professional support to help you in case of difficulties.
If I go grab the python zip for my platform it is already a "portable app" what is there for portable apps to do? The only dev environments I can think of that aren't portable by default are not free, so portable apps wouldn't be able to release them anyway.
I've linked your post to the ##prolog channel on freenode but I'm mostly a n00b so if at some point you want help figuring it out, you'd probably be better joining yourself and giving a more complete report.
Our architecture/use-case: At Netsil, stateful packet processing pipelines are written in declarative rules and materialized tables are backed by an SQL-compatible embedded in-memory DB. Tuples are processed in parallel and parallelism is controlled by specifying context constraints in rules (e.g. packets within the same TCP flow should be processed in order). Further, Datalog workflows are distributable by providing a "location specifier" in rules -- i.e. tuples and attributes serialize to protocol buffers and can be sent/received over ZMQ. Also, the materialized tables in Datalog can be made to sync up with Zookeeper, allowing distributed stream processors to do service discovery and so on. It's a pretty sophisticated runtime/compiler, written primarily in C/C++ for optimal performance. The underlying runtime uses a combination of Intel TBB and Boost ASIO.
We are in general big fans of declarative approaches as they have saved us a lot of time, allowing our small team to leapfrog the competition. You can learn more about our architecture here: https://netsil.com/blog/listen-to-your-apis-see-your-apps/
Disclaimer: I am co-founder of Netsil (www.netsil.com).
For example, when dealing with bitemporal data (common in finance) you might have a set of facts with two date range attributes. Lets simplify by saying we have a set of facts each having a start date and end date. Here is some non-working Prolog that could work if there was such a capability.
ticker(entity('TimeWarner'), 'TWC', date(1999-01-01), date(2014-04-31)).
ticker(entity('TimeWarner'), 'AOL', date(2014-05-01), date(9999-01-01)).
current_at(ticker(entity(_), _, Start, End), T) :-
T @> Start,
End @> T.
-- find current ticker for Time Warner
current_at(ticker(entity('TimeWarner'), _, _), date(2017-05-29))
-- SWI prolog can not unify the above clause!
Now, it turns out that this sort of exists already, it's called Answer Set programming. There is one implementation out there  - but I didn't feel like dredging up an old research project.
 - http://potassco.sourceforge.net/
ticker('TimeWarner', 'TWC', date(1999-01-01), date(2014-04-31)).
ticker('TimeWarner', 'AOL', date(2014-05-01), date(9999-01-01)).
transform_date(date(Y-M-D), R) :-
    % encode a date as a single integer: 32 > 31 possible days, and 416 = 13*32 > 12*32
    R is Y * 416 + M * 32 + D.

after(Adate, Bdate) :-
    transform_date(Adate, A),
    transform_date(Bdate, B),
    A @> B.

current_at(Name, C, Start, End, T) :-
    ticker(Name, C, Start, End),
    after(T, Start),
    after(End, T).
?- current_at('TimeWarner', C, S, E, date(2017-05-29)).
C = 'AOL',
S = date(2014-5-1),
E = date(9999-1-1).
ticker(entity('TimeWarner'), 'TWC', date(1999-01-01), date(2014-04-31)).
ticker(entity('TimeWarner'), 'AOL', date(2014-05-01), date(9999-01-01)).
[EDIT:] In case it wasn't clear, I'm suggesting that it would be better to write this:
ticker(entity('TimeWarner'), 'TWC', date(1999-01-01), date(2014-05-01)).
ticker(entity('TimeWarner'), 'AOL', date(2014-05-01), date(9999-01-01)).
ticker(entity('TimeWarner'), 'TWC', date('1999-01-01'), date('2014-04-31')).
ticker(entity('TimeWarner'), 'AOL', date('2014-05-01'), date('9999-01-01')).
current_at(Name, TLC, DS, DE, date(Tp)) :-
    ticker(entity(Name), TLC, date(DS), date(DE)),
    Tp @> DS,
    DE @> Tp.
?- current_at('TimeWarner', TLC, DateStart, DateEnd, date('2017-05-29')).
TLC = 'AOL',
DateStart = '2014-05-01',
DateEnd = '9999-01-01'.
If you really require the dates to be in yyyy-mm-dd format, the code above becomes slightly more verbose but it's not the end of the world.
current_at(Entity, C, Start, End, Current) :-
    ticker(Entity, C, Start, End),
    Start @< Current,
    Current @< End.
This works with the database you posted:
?- current_at(entity('TimeWarner'), C, S, E, date(2017-05-29)).
C = 'AOL',
S = date(2014-5-1),
E = date(9999-1-1).
For those less familiar with Prolog: in Prolog terms, a date in a format like 2014-5-1 is a compound term with the operator "-" as its functor (i.e., a term -/2) and 2 arguments: 2014-5 and 1. The first of those is, again, a compound, also with functor - and two arguments, 2014 and 5. So the entire date is a recursive term.
A comparison predicate, like @>/2, etc, will walk over the arguments of this term and compare them as it goes - but since a "date" is not a Prolog type (only "number" and "atom" really are) it will not treat a term meant as a date in any special manner.
Which, in the end, means that the following queries are all true:
?- 2014-5-1 @< 9999-1-1.
?- 2014-5-0 @< 9999-1-1.
?- 2014-5 @< 9999-1-1.
?- 2014-50-1 @< 9999-1-1.
?- form_time([2014-5-1], T).
T = datetime(56778, _5804),
_5804 in 0..86399999999999.
?- form_time([after(2014-5-1)], T).
T = datetime(_1304, _1306),
_1304 in 56778..514671,
_1346 in 4905619200000000001..44467660799999999999,
_1416 in 4905619200000000000..4905705599999999999,
_1484 in 0..86399999999999,
_1306 in 0..86399999999999.
I've never seen one that came close to a real Prolog. Having a library that implements a slow, informally specified tiny subset of Prolog doesn't say anything about the usefulness of real Prolog.
and some do a lot of largely independent but very regular work that would execute quite well on a simd/vector/smt array.
people have come up with some tricks to map control flow into simd (like some really cool parser tricks), but i think in general those have regimes where they have sub-serial performance.
so maybe? if you had the magic compiler? or you provided some manual annotation support? or a robust ffi?
for sql, which has a much more limited footprint, there's been some cool vectorization work.
..and, sadly, it didn't look like C
// strangely Prolog is listed as a spelling error by Firefox...
Eventually the ideas in Prolog will make their way into a general purpose language where the relationship between the logical components and the algorithmic components of a program is harmonious instead of a constant conflict.
Here's one approach: http://www.swi-prolog.org/pldoc/man?section=pwp
Scroll down for examples. You might disagree, but I think the fit between Prolog semantics and expanding XML templates is surprisingly natural.
This is the central problem, I think. There are problems which are traditionally solved using a different, non-Prolog-like approach, and we're mostly comfortable with that approach, unlike with others.
A successful general-purpose language has to solve all - or at least all important - problems sufficiently well. In communicating with each other we use natural languages, which conveniently allow us to omit the hard parts if we wish, so they bend rather easily to everything. With precise languages we so far have to either hop between paradigms or use clunky detailing. We either need to look at everything through a Prolog (or other language) lens or keep using a variety of tools.
When it comes to mundane tasks such as opening a file and reading its contents as a string or accessing databases, things get even more difficult. Technically, this is all possible with Prolog, too. It's just not exactly fun to do so.
As to opening a file and reading its contents as a string:
I find it best to use Ulrich Neumerkel's library(pio) to accomplish this task. Importantly, this lets you apply a DCG to a file in a pure way. I start with a DCG that simply describes a list of characters:
content([]) --> [].
content([C|Cs]) --> [C], content(Cs).
?- phrase_from_file(content(Cs), 'content.pl').
Cs = [c, o, n, t, e, n, t, '(', '['|...] .
As to accessing databases: That's quite straight-forward too, in particular if we take into account the following: If you are really using Prolog professionally, then typically Prolog is the database. You simply assert facts, and retrieve them by querying the built-in Prolog database.
Personally, I find Prolog queries much more convenient and also more expressive than SQL, and great fun too.
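A minimal sketch of what that looks like (the facts and predicate names here are made up for illustration):

    :- dynamic employee_dept/2, dept_city/2.

    % "INSERT" is just asserting facts:
    ?- assertz(employee_dept(anna, sales)),
       assertz(dept_city(sales, berlin)).
    true.

    % "SELECT ... JOIN ..." is just a conjunction:
    ?- employee_dept(Name, Dept), dept_city(Dept, berlin).
    Name = anna,
    Dept = sales.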
That was my personal aha moment with Prolog when I realised that Prolog statements are quite similar to SQL queries in that you declaratively define the results you expect instead of the exact directions describing how to arrive at those results.
It's a very powerful and elegant concept.
Prolog is not so popular for general purpose computing since: compilers are inconsistent, compatibility problems, difficult debugging, high maintenance costs, few experts, steep learning curves (my professor joked that the more computer science the student is exposed to, the harder is the mental switch to Prolog).
Prolog remains great for education on logic, NLP parsers, recursion.
From above page:
>Prince was developed using the Mercury functional logic programming language.
For example, I can write some code in clojure, that, for example, implements a UI which then calls core.logic to do some processing, which then calls some clojure to pull the logic data from a database. If I wanted to use prolog, I'd have to do something like: (other language -> ffi -> prolog -> ffi -> other language) which is usually too much effort for me to bother.
It is not intuitive, and most programs aren't logic problems in the sense that Prolog can solve. It is highly specialized.
It belongs to an era -- and this era isn't "over" -- when the primary manner of solving AI was symbolic.
Java, C, and many other programming languages are like chess: There are many syntactic rules, and by learning them, you already obtain a rough overview of what you can do in principle. You try out these constructs, and get a sense that you have accomplished something, even if it is rather worthless, and more complex tasks are extremely hard to carry out successfully in these languages.
Prolog is more like Go: The syntax is very simple, and there is essentially only a single language element, the logical rule. This means that even if you know, syntactically and semantically, almost everything about the language, you have no idea what to do at first. This can be rather frustrating. From this, beginners easily arrive at the misguided conclusion that the language is useless, or restricted to very specific applications. But it only means they have not grasped its true power and flexibility! Getting to the core of Prolog is hard, and requires systematic guidance.
This inherent difficulty is frequently compounded by a rather ineffective and outdated didactic approach which, at its worst, stresses difficult and mostly superseded procedural aspects over more important declarative principles and more modern solutions like constraints. This easily gives the misguided impression that the language is rather imperative and limited in nature, and again causes many students to dismiss it due to their wrong impressions.
A third reason is found in the implementational complexity: From a user's perspective, a major attraction of Prolog is its ease of use due to the syntactic simplicity, powerful implicit search mechanism, generality of predicates etc. which are features that are rather specific to logic programming languages. The complexity of all this is shifted to the implementation level: In order to make all this both powerful and efficient, the implementation must do many things for you.
This means you need, among other things and in no particular order: an efficient garbage collector, JIT indexing, a fitting virtual machine architecture, a fast implementation of unbounded integers, rational numbers, good exception handling, ISO compliance, many goodies like tabling, an efficient implementation of constraints over integers, Boolean variables, Herbrand terms etc. Most of these topics are even now still subject of active research in the logic programming community, with different advantages and trade-offs. Implementing an efficient Prolog system is a project that easily takes 30 to 40 years.
In fact, we are only now getting to the point where systems become sufficiently robust and feature-rich to run complex client/server applications for months and years. In such complexities, you find the answer why Prolog isn't more popular yet. It has simply taken a few decades to implement all this in satisfactory ways, and this work is still ongoing. In my view, Prolog is now becoming interesting.
To the second point, Prolog already is a great general-purpose language. You can use it for almost all applications that are currently written in Java and Python, for example. Of course, there are always some features that are worth adding on top or via extensions, and certain tasks would benefit from this. For example, you can add extensions for type checking, and for fast arrays. Various Prolog implementations are already experimenting with such extensions. Many extensions can in fact be implemented via term and goal expansion, a facility that is analogous to macros in Lisp, or via simple reasoning over given programs.
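For instance, a toy term_expansion/2 hook (SWI-Prolog; the point/point3d facts are made up) that rewrites facts at load time, much like a macro would:

    % Every point/2 fact in this file is loaded as a point3d/3 fact.
    term_expansion(point(X, Y), point3d(X, Y, 0)).

    point(1, 2).
    point(3, 4).

    % ?- point3d(X, Y, Z).
    % X = 1, Y = 2, Z = 0 ;
    % X = 3, Y = 4, Z = 0.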
*) machine and assembly languages: simple syntax, simple semantics
*) procedural languages: complex syntax, AST not available, many language constructs
*) Smalltalk, LISP, Prolog: AST available, few language constructs
Would I write an operating system in Java? No. A Unix command-line tool? No. A video game? Probably not.
In my view, this casts some doubt on the test's adequacy to answer the initial question. In the concrete case of Java, I think marketing and other influences also played important roles. It could be possible to apply these advantages to Prolog too.
Operating system in Prolog: Non-trivial, i.e., like in Java. Command-line tool: Pretty simple: My main predicate receives command line arguments. There are countless such examples already, included in the SWI-Prolog distribution (for example). And, as you say, for games I would do pretty much the same as in any language also in Prolog.
In my view, this still leaves the same doubt about these questions: Can we distinguish Java from Prolog in any way by answering them?
To use it in a more general purpose sense, I think you need a couple of things.
First, you need some really great training and or books showing practical examples as well as how to overcome common issues with performance.
Second, you need more people contributing to some of the open source options like SWI Prolog.
Third, I think you need more ready to use bindings for the popular languages out there. SWI Prolog provides a C interface, but if you had an interface to say Node, Go, Rust etc that was simple to install with some good examples, you could reach more people.
both have a different/specific (as in non-mainstream) thinking way, and it is not easy to switch from common programming languages to these. and since it isn't easy, most people don't go deeper on them
from a company point of view: if it's hard to find a good Prolog/haskell developer, then they will be more expensive, so they stick with the common Java/C/C#/Python/Ruby/JS stack
Comment:
Attributes: text, points
User:
Attributes: name, emailAddress (as a struct of first part, domain, top-level domain)
Admin (a special user):
additional Attributes: set of rights (can delete, can hide, can modify)
Finally, there is an n-1 association between Comment and User and I want to make some queries about this domain.
We can represent comments by facts like:
comment_id_user_text_points(3, 1, 'hello!', 0).
Each user can likewise be represented as follows, relating a unique ID to a name and e-mail address:
user_id_name_email(1, 'randomUser1122', [random,'ycombinator.com']).
?- user_id_name_email(ID, _, _),
comment_id_user_text_points(_, ID, Text, _).
This is all completely analogous to how you would represent such data in any database system. It's more convenient in Prolog though for several reasons.
How would you make sure that the rights of admins are only delete, hide, and modify (and not e.g. walk, talk, chalk) ?
If I use
admin_id_rights(ID, Rights) :- number(ID), sort(Rights, SortedRights), subset(SortedRights, [delete, hide, modify]).
Moreover, how would you then 'create' objects without adding them with asserta. Assume that all the user, admins, comments are written to a text file which should be queried. I can easily read them into some nested compounds with DCGs. However, I would like to create objects(e.g.like Java's new) to check that they adhere to the contraints of my model.
As to your first question, we are now talking about (database) integrity constraints. So I would first state which rights are admissible at all by clearly defining what we consider a right. For example, in this concrete case:
right(delete).
right(hide).
right(modify).

Assuming the admin rights are stored as facts like admin_id_rights(ID, Rights), any stored right that is not one of these is then a violation, and we can ask for such violations explicitly:

user_bad_right(ID, R) :-
    admin_id_rights(ID, Rights),
    member(R, Rights),
    \+ right(R).

?- user_bad_right(ID, R).
As for creating objects: This is a good case of using the dynamic database, i.e., predicates like assert/1 and assertz/1. The database is very good for frequently retrieving information, but not good for frequently updating the information. This is a fitting situation: Comments are presumably only posted once, and in that case, you can simply add such facts to the database.
But you can of course also make all this explicit, and first construct a term of the form comment_id_user_text_points(3, 1, 'hello!', 0), and then reason about such terms (instead of reasoning about the asserted facts). Note that this term looks exactly like the fact syntactically, due to the homoiconic nature of Prolog. Therefore, you have many ways to reason about your data. You can even write all such terms to an external file, and simply consult all facts (and even rules) it contains by invoking consult/1 dynamically.
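For example (a sketch; the file name and the extra comment fact are made up):

    ?- open('comments.pl', append, S),
       portray_clause(S, comment_id_user_text_points(4, 2, 'nice library!', 0)),
       close(S).
    true.

    ?- consult('comments.pl').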
Since Prolog is now gaining more traction, I am adding to my favourites all Prolog links which I consider noteworthy or recommended reading. Please see my profile for more information, and also for future updates.
comment("I tried programming in prolog...", 100, randomUser1122).
comment("Here's a shot.", 2, tom_mellior).
user(randomUser1122, email(random, randomdomain, com)).
user(tom_mellior, email(tom, mellior, su)).

% sample rights for the admin (these facts are implied by the answers below):
admin_right(randomUser1122, delete).
admin_right(randomUser1122, modify).

?- user(User, email(_, _, su)), admin_right(User, Right).
?- comment(_, Score, User), Score >= 10, admin_right(User, Right).
Score = 100,
User = randomUser1122,
Right = delete ;
Score = 100,
User = randomUser1122,
Right = modify .
In my experience Prolog is conceptually the coolest, but practically the worst when trying to get anything done.
Basically, writing a program in Prolog is like solving a puzzle. Nobody wants to solve an additional "puzzle" on top of the problem they already set out to solve by programming. (Unless they're doing it for fun.)
"Who Killed Prolog?" by Maarten van Emden. https://vanemden.wordpress.com/2010/08/21/who-killed-prolog/
Posits that the huge hype generated by the Japanese Fifth Generation Computing Project, which Prolog failed to live up to, essentially killed off interest in the language, and it never recovered.
"Why did Prolog lose steam?", a sort-of reply I wrote. http://www.kmjn.org/notes/prolog_lost_steam.html
Posits instead that much of the low-hanging declarative fruit has been picked off by other, more specialized languages, ranging from SQL to production-rule systems to even LINQ, so Prolog no longer is the default go-to declarative programming language.
That would give you an error, and rightly so. You can't really expect to sort what's not there. Or am I misunderstanding your comment somehow? I don't quite understand what you mean by "inlining" and how that can help sort a list of no-values?
p(X) :- sorted(X).
For example, let us describe sorted lists of integers:
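A sketch of one such description, using CLP(FD) constraints (the exact definition is assumed here; the #=< constraint is what allows the residual answers shown further below):

:- use_module(library(clpfd)).

% A list of integers is sorted if each element is at most the next one.
sorted([]).
sorted([_]).
sorted([X,Y|Rest]) :-
    X #=< Y,
    sorted([Y|Rest]).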
Now the point:
Exactly as you say, the predicate works for concrete lists of integers that are already given. It also works when some elements are still unknown. For instance, a query such as ?- sorted([X,2]). answers with the residual constraint:

X in inf..2.
We can obtain concrete solutions with enumeration predicates. For example:
?- Vs = [X,Y,Z], sorted(Vs), Vs ins 1..3, label(Vs).
Vs = [1, 1, 1], X = Y, Y = Z, Z = 1 ;
Vs = [1, 1, 2], X = Y, Y = 1, Z = 2 ;
Vs = [1, 1, 3], X = Y, Y = 1, Z = 3 ;
Vs = [1, 2, 2], X = 1, Y = Z, Z = 2 .
We can also post the most general query, asking for sorted lists in general:

?- sorted(Ls).
Ls = [] ;
Ls = [_28] ;
Ls = [_170, _176],
_170#=<_176 ;
Ls = [_1194, _1200, _1206],
_1194#=<_1200,
_1200#=<_1206 .
Sure, but X is a variable, implicitly universally quantified, and p(X) <- sorted(X) is only going to be true for some values of X. Unless you apply some stricter constraints, for example, as indicated below, you can't really know for which values the relation is true.
Are you saying that, in a purely declarative context, p(X) :- sorted(X) is always true? That depends entirely on the definition of sorted/1. For instance, the following is trivially always false:
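One such definition, just as an illustrative possibility:

% With this definition, sorted/1 (and hence p/1) can never hold:
sorted(_) :- false.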
I'm still a bit unsure about what you are trying to say and what you mean with inlining properties etc so apologies if I haven't addressed your concerns.
The reason is that declarative languages in general are not as immediate as imperative languages. There is nothing intrinsic about declarative languages themselves; this has mostly to do with the fact that we are taught and exposed to imperative languages first, and only later hear about other, more "exotic" paradigms.
Second issue. In a sense, the elegance of the language has been its biggest weakness. The academics loved to play with conceptual matters. A lot of effort went into papers on semantics and mapping different types of reasoning, but not much into tools, IDEs, or compilers. The community didn't build enough libraries, the right abstractions, or a software engineering practice. Every time you start a project in Prolog you are starting from scratch.
Also, the elegance of the language is the reason a lot of people approaching Prolog get quite demanding. "It's logic programming so why do we have to use a cut?". Yet most programmers have no problems with the quirks of C++ or Java.
The third one is that Prolog hasn't found its area of excellence. C, Go, Scala, Java all have their own strengths and scenarios where they are the best candidates. Prolog would in theory make an excellent candidate for representing complex domains based on rules, covering a module of a larger piece of software. It would be perfect to represent the rules of a board game or the knowledge of a chatbot, but for a number of reasons that's not happening. How does the reasoner scale with larger datasets? Will it be hard to manage? Is there an example of something similar being attempted?
In my very personal opinion, as a community we should learn from these lessons, take the best bits of Prolog and make something new.
At Grakn.ai (https://grakn.ai/) we are working on a graph database that uses an inference layer based on Prolog's resolution, maybe worth having a look. The idea there is that Prolog maybe shouldn't become a great general-purpose language, but its best parts must be used as a base for the next advances in knowledge representation and reasoning.
In that thread, I show how you can express the sample relation in Prolog. Maybe you can go into more detail, either here or in that thread, on the advantages of Grakn over Prolog for such cases?
As to your second issue above: Please note that there are many collections and even entire books that describe antipatterns of Java, C++ and many other programming languages. In my experience, programmers from these communities quickly learn to avoid these antipatterns. In the Prolog community, this happens more slowly, but it does happen too. There are very good reasons to avoid !/0 and other impure constructs in logic programs. In my view, the key issue is to find and teach better constructs that should be used instead, and the best alternative language constructs are still waiting to be discovered.
In this spirit, I fully agree with you that we should keep Prolog's best aspects, and extend them as far as we can into the directions we need.
It's a very specialized system in my view, so there is no hope of it ever becoming general-purpose. But maybe that's because I don't know enough of Prolog.
In particular, I recommend the following publication:
if_/3 and other declarative predicates like dif/2 are more general alternatives and likely good solutions in the cases you mention. They are still quite recent, at least if we ignore the fact that dif/2 was even available in the very first Prolog system, sometimes called Prolog 0.
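For example, with library(reif) (bundled with some Prolog systems and available as an add-on pack for others), which provides if_/3 together with a reified equality (=)/3:

?- if_(X = a, T = yes, T = no).
X = a, T = yes ;
T = no, dif(X, a).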
As to why it's not more popular, I've thought about this very often and I don't
have an answer. What I know for sure is it's never going to become more
popular until people move on from that silly soundbite about its "general
purpose"-ness, which never made any sense to begin with.
People program "general purpose" stuff in languages that are much worse for
"programming in the large" than Prolog. Most of the big operating systems are
written in C, large swathes of game code is in some assembly language or
other, about 60% of enterprise code is in Java and most supercomputing code is
in FORTRAN fer chrissake. Not to mention, all of the internet is in
snippets of code to manage buttons and text fields and stuff. You're not going to tell me those are better suited to "programming in the large" than Prolog.
Prolog is already a general-purpose language. All you need to do is have a look at the library section of the SWI-Prolog documentation. Besides the usual suspects (constraint logic, tabling, lambdas and such, and of course parsing of all possible text-based formats ever in time dt) we find a bunch of diverse libraries:
An http package for all your client/server needs 
A library for opening web-pages in a browser in a system-agnostic manner 
A library for command-line parsing 
An RDF parser and a semantic web library
A package manager 
A random numbers generation library
A library for manipulating the Windows registry 
A library for solving linear programming problems 
A thread pool management library 
And a whole lot of support for a bunch of other stuff like coroutining, multithreaded applications, a profiler, terminal control, an ODBC interface, an interface to Protocol Buffers, bindings to zlib, GNU readline, and so on and so forth.
In what sense is all that not "general purpose"?
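To illustrate, the http libraries support a complete (if tiny) web server in a handful of lines; this is essentially the introductory example from the SWI-Prolog documentation (handler name and port are arbitrary):

:- use_module(library(http/thread_httpd)).
:- use_module(library(http/http_dispatch)).

% Register a handler for the root URL.
:- http_handler(root(.), say_hi, []).

% Start the server on the given port; requests are routed via http_dispatch/1.
server(Port) :- http_server(http_dispatch, [port(Port)]).

say_hi(_Request) :-
    format('Content-type: text/plain~n~n'),
    format('Hello from Prolog!~n').

Loading this file and calling ?- server(8080). is all it takes to serve requests.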
I have never used Prolog for anything serious, but I think it has great potential and I really want to like it. It almost looks like the perfect programming model, and I want to understand why it's not.
What you're asking is something that the logic programming community has asked itself very often, but it's very hard to answer with any certainty.
One thing that should be noted is that Prolog was very popular, for a brief period of time, in the 1980's. For instance, check out this year's TIOBE index report:
If you scroll down to the section titled "Very Long Term History" you'll see Prolog listed as the 3rd most popular language in 1987 (behind Lisp in second place and C in first, and before C++ in fourth). By 1992, it had dropped to 14th place and then it was pretty much all downhill from there.
As a personal anecdote, I've read a number of Prolog textbooks from the late '80s and early '90s that begin by saying that it is very important to learn Prolog because it is sure to become a very popular language in the future.
In other words, Prolog did have its time in the sun. But then it fell from grace.
As far as I can tell, the most likely narrative to explain this meteoric change in fortunes is the one that pins the blame on the association of Prolog and logic programming to the Japanese Fifth Generation Computer project. This (theoretical) explanation of the rise and fall in popularity of Prolog is proposed here:
In short, how this story goes is that, when Japan chose to use logic programming for its Fifth Generation Computer project, which was seen as potentially extremely disruptive by the West, companies and academics in Europe and the USA suddenly took a great interest in Prolog, thinking that the Japanese must know something they didn't. Then, when the Japanese project flopped, it took Prolog with it.
I stress again it's just a theory, but, to me in any case, it's at least very plausible.
Clojure also brings with it logic and relational programming which is likely the go-to choice of the Clojurist for expressing work-flow or permissions management type problems. Not exactly Prolog but it's the same difference.
Also, I once attended a talk by a Japanese researcher who was intimately involved in the project. One significant phenomenon at that time was that commodity hardware was progressing much faster than had been anticipated, in the end eclipsing the specialized designs that were being worked on.
There are other reasons too, and here I can only say that Prolog as it is today had very little to do with the outcome. In fact I think this would be a great follow-up question to the present discussion!
This is exactly what happened. A slow language like Prolog on OK hardware that accelerates it can't compete with a fast language like (not-Prolog) on highly-custom, top-of-the-line hardware that accelerates it. It can't in the general case, and plenty of times not in the special case either. Now, that was when Moore's Law was in full swing. There's potential now for that pendulum to swing in reverse for something like this.
Your particular example of transitive relations is one of the most elementary examples that are typically solved in basic Prolog courses as exercises, when defining reachability: B is reachable from A if there is an arc from A to B, or an arc from A to some node from which B is reachable. Transitivity can easily be defined by a rule, as you correctly mention. For instance, let us take your example:
fact(a > b).
fact(b > c).
transitive(A, B) :- fact(F), F =.. [_,A,B].
transitive(A, C) :- fact(F), F =.. [_,A,B], transitive(B, C).
?- transitive(X, Y).
X = a,
Y = b ;
X = b,
Y = c ;
X = a,
Y = c ;
We can also reason explicitly about the relation, by making the functor (which may, or may not be defined as an operator) available as a predicate argument:
transitive(Op, A, B) :- fact(F), F =.. [Op,A,B].
transitive(Op, A, C) :- fact(F), F =.. [Op,A,B], transitive(Op, B, C).
?- transitive(Op, X, c).
Op = (>),
X = b ;
Op = (>),
X = a ;
?- transitive(Op, a, Y).
Op = (>),
Y = b ;
Op = (>),
Y = c ;
Maybe Prolog has just not found the right context to run in?
However, Prolog lets you tackle not only propositional logic, but tasks that go far beyond this, belonging to a logic called classical first order logic, of which propositional logic is only a subset. In first order logic, we reason about predicates between terms, and this lets you tackle much more complex tasks, far beyond what SAT/SMT solvers can solve for you.
In fact, first order logic is so powerful that it lets you describe everything you can in principle perform with a computer. It lets you describe how a compiler works, for example. Or how numerical integration works. And all other computations you have ever seen computers perform.
In short, Prolog is a programming language, and can in fact even be used to implement a SAT solver, which is impossible to do with just a SAT solver. Many Prolog implementations even ship with a SAT solver as one of their libraries.
There are also even higher-order constructs in Prolog, such as predicates that let you invoke other predicates, making programming in Prolog very expressive and convenient.
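A small example of this, using the standard maplist/3 together with SWI-Prolog's succ/2: the goal succ/2 is passed to maplist/3 as an argument and invoked on each list element.

?- maplist(succ, [1,2,3], Ls).
Ls = [2, 3, 4].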
Not quite sure what you're referring to here, but induction schemata seem to be a notable exception. There's a reason higher-order logic is often preferred for software verification.
I guess FOL is adequate if you're allowed to have an infinite number of axioms, but that doesn't seem very satisfying (pun intended).
In Prolog, the more natural approach that closely corresponds to SMT is simply implementing the theory as a constraint solver. For example, check out CLP(FD) and CLP(Q) for constraint solvers over integers and rational numbers, respectively. They let you formulate statements over these theories, and search for solutions. Note though that solving equations over the integers is not decidable (only semi-decidable), and so you may search indefinitely if there is no solution.
Importantly, constraints over these theories blend in completely seamlessly into Prolog, since they are simply available as predicates. For example, we can write:
?- A^N + B^N #= C^N,
N #> 2,
[A,B,C] ins 1..sup.
ASP is based on one of those Prolog-semantic proposals, the "stable-model semantics", which competed with other proposals like the "well-founded semantics". Although these are first-order in principle, existing practical tools only implement propositional solvers. ASP systems still take a Prolog-like input language that looks first-order, but they work by first "grounding" the first-order formulae to a propositional representation, and then solving them. If you make suitable assumptions about finite domains etc. this has the same expressivity, but sometimes causes blow-up (other times it causes surprisingly fast-running programs, though).
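Schematically, grounding replaces a first-order rule by its instances over the constants that occur in the program. As a toy illustration (not tied to any particular ASP system):

% First-order rule plus facts:
p(X) :- q(X).
q(a).
q(b).

% What the propositional solver effectively sees after grounding:
p(a) :- q(a).
p(b) :- q(b).
q(a).
q(b).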
This is a good open-source ASP system: https://potassco.org/