Knowing the history of Prolog can be very useful to make progress in this area, because quite often, progress in this area means returning to what earlier systems already did.
For instance, already in the very first Prolog system, Marseille Prolog, a string like "hello" was treated as a list of characters, i.e., [h,e,l,l,o]. This was great for usability, as one would expect from a language that was designed for natural language processing.
Later systems switched this to lists of character codes, denoting code points in the used encoding. For instance, when using ASCII or one of its supersets, "hello" would then mean [104,101,108,108,111], which is much less readable in answers.
Still other systems introduced their own ad hoc string types that were not lists and therefore could not be handled with Prolog's built-in mechanisms for reasoning about lists, most notably Definite Clause Grammars (DCGs). Such types even required their own dedicated and usually moded predicates, thereby severely limiting their generality while increasing the complexity of application code.
Only the most recent Prolog systems, such as Tau Prolog and Scryer Prolog, are now doing what Marseille Prolog did, namely treating strings as lists of characters, and are also beginning to combine this with efficient internal representations to finally get all advantages at once: usability, generality, simplicity and efficiency.
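For instance, in a system where double-quoted strings denote lists of characters (as in Scryer Prolog by default), a string can be decomposed with ordinary list unification, and with a compact internal string representation the answer may even be printed in string notation again:

?- "hello" = [H|T].
H = h, T = "ello".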
Type checks are another example where Prolog systems are now going back to the solid foundations of Marseille Prolog, after decades of detours. For instance, to preserve monotonicity of Prolog programs, type checks must raise instantiation errors or delay the check if no decision can be made. So, for the query ?- integer(X)., the following would be a logically well-justified response, because X can still become an integer:
ERROR: Arguments are not sufficiently instantiated
A third example of this is the constraint dif/2, a relation with very desirable logical properties: it denotes syntactic disequality of terms, i.e., it is true if and only if its arguments are different, and it delays checks until a definite decision can be made. This was present even in the predecessor of Marseille Prolog, Prolog 0, and was then treated at best as a strange curiosity, or not present at all, in many later systems before it became more widely available again.
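To illustrate, dif/2 can be posted before its arguments are instantiated, and the decision is delayed until it can be made definitely:

?- dif(X, a), X = b.
X = b.

?- dif(X, a), X = a.
false.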
Using constraints also enables an even more elegant solution for the above cases. For instance, using CLP(ℤ), constraint logic programming over integers, we can constrain a variable to integers in the most general way:
?- X in inf..sup.
X in inf..sup.
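Such a constraint combines seamlessly with further goals: narrowing the domain or instantiating the variable later only ever shrinks the solution set, exactly as monotonicity demands. For example, with a CLP(ℤ)/CLP(FD) library loaded:

?- X in inf..sup, X #> 3, X = 4.
X = 4.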
Perhaps it's habit, but I expect integer(X) to fail rather than raise an error, because X is a variable and not an integer. I'm not sure what the ISO standard says here, but if integer(X) fails, this is a natural way to test the type of an object:
Also, the following is consistent behaviour:
ERROR: Arguments are not sufficiently instantiated
By monotonicity, I expect that if a query succeeds unconditionally, then a generalization of it certainly does not fail (otherwise, adding a constraint would increase the set of solutions). But that is exactly what happens here, so monotonicity is violated: adding a constraint can make a query succeed that previously failed.
?- X = 0, integer(X).
X = 0.
If monotonicity is broken, then iterative deepening is no longer applicable, because increasing the depth of the search may now invalidate conclusions that were derived earlier. However, one of the foremost attractions of Prolog is the prospect of executing the same program with different evaluation strategies.
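As a minimal sketch, iterative deepening can be implemented with a depth-bounded meta-interpreter. This assumes the program's clauses are accessible via clause/2, and uses the common length/2 idiom to enumerate increasing depth bounds:

% solve_d(Goal, Depth): prove Goal using at most Depth resolution steps.
solve_d(true, _).
solve_d((A, B), D) :-
    solve_d(A, D),
    solve_d(B, D).
solve_d(G, D) :-
    G \= true, G \= (_, _),
    D > 0,
    D1 is D - 1,
    clause(G, Body),
    solve_d(Body, D1).

% solve_id(Goal): retry with depth bounds 0, 1, 2, ...
solve_id(G) :- length(_, D), solve_d(G, D).

If a type test inside the goal raises an instantiation error instead of silently failing, such a driver remains sound: a goal that errored at some depth is not mistaken for a refuted one.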
And, yes: The above behaviour of integer/1 is in fact prescribed by the ISO standard. Unfortunately, DEC-10 Prolog chose to replace instantiation errors by silent failures, and this has been perpetuated for type tests throughout the Edinburgh tradition, including in the standard.
A way to correct this is therefore to introduce a new family of type tests, the ..._si/1 family of predicates as available in library(si). They are what the standard type tests should have been in the first place to preserve these desirable logical properties.
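For example, with library(si) as provided in Scryer Prolog, the three cases come out, schematically, as:

?- integer_si(3).
true.

?- integer_si(a).
false.

?- integer_si(X).
ERROR: Arguments are not sufficiently instantiated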
Thank you for clarifying. If I understand correctly you are saying that the
success set of a query should increase monotonically with the generality of
the query. If so, I don't think that's an unreasonable demand, but I do think
it is a bit ideological, given that normal _programs_ are non-monotonic anyway
(because negation as finite failure is non-monotonic).
Now, I have to point out that both type-checking and errors are extra-logical
features of Prolog (there are no types in first-order logic and of course
there are no exceptions). In that sense, it's a bit arbitrary how a predicate
"should" be defined. The decision must be made in the context of Prolog, not
logic, and the needs of Prolog as a programming language.
In that context, the problem that I see with raising an error on "integer(X)"
is that integer/1 is performing a type-check in the context of the datatypes
recognised by Prolog. So it makes sense that it should succeed when its
argument is of the "integer" datatype and fail if its argument is any other
datatype. In this context "X" is not an uninstantiated term, but a term of type
"variable", where variable is a Prolog datatype, like "integer".
I think of it as a membership test. If the argument of integer/1 belongs to
the set of integers, then the query should succeed. If not, it should fail. A
variable is not in the set of integers, so the query should fail.
I also suspect that iterative deepening and, indeed, any kind of
nondeterministic program, would be made a lot more complicated if type checks
raise errors left and right (a problem you've highlighted with respect to
Prolog meta-interpreters and instantiation errors raised by clause/2, in another discussion we had here, I think).
Personally I dislike errors: not only are they extra-logical, they force the
programmer to add all sorts of boilerplate to her program to handle them and
make for code that is harder to read and parse at a glance. The automatically
caught exception in Scryer Prolog that you list above is even worse to my
mind: it enshrines the boilerplate in the behaviour of the language (if I
understand correctly what is being done). So now errors are a first-class
citizen of the language. That's awful! I would much prefer a language that
helps the programmer identify and eliminate programming errors _before_ the
program gets to the point where it has to exit abnormally.
The nice thing when you have (such) errors is that if they do not happen you can be sure that the result is correct (in an appropriate pure subset). Without errors, you never know.
What I mean is that if:
A =.. [P,X,Y]
then:
A = P(X,Y)
(I use "atom" in the non-Prolog sense, to mean a predicate symbol followed by terms in parentheses).
Edit: that said, LPA Win-Prolog allows P(X,Y), but that is a quirk of that interpreter and not standard at all.
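For reference, (=..)/2, usually called univ, mediates between the two views in either direction:

?- A =.. [f, 1, 2].
A = f(1, 2).

?- f(1, 2) =.. L.
L = [f, 1, 2].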
The other meaning of type is the pure, declarative one. Here, variables do not belong to the valid types. In that context, atomic_si/1 and the other predicates fit in. Here, commutativity (modulo errors) holds. And thus alternative search procedures may be used.
(There is a third meaning similar to the second, but it is only standard related.)
Regarding pure, declarative types: those again are extra-logical and so, in the context of a first-order logic language, they are arbitrary and conventional. You can say "it's better this way", someone else can say "no, it's better this other way", but at the end of the day there's nothing set in stone. Far as I can tell.
Anyway I really think all those little complaints about Prolog's lack of absolute, pure perfection are always bordering on nitpicky, and from the wrong side of the border, at that. Peter Norvig says Prolog is not a general purpose relational solver, but a general purpose programming language. Well, yes, because it was, from the start, designed to be a general purpose programming language! That is, a language that can be used for, you know, actual programming in first order logic. Of course it's not perfect.

That has always created a rift in the community, with some people taking the issue more seriously than others. Some have always been happy that they (we) have a language that's 90% of the way to pure, declarative perfection and that lets you express yourself in code in ways that would be really painful in any other language (just think of DCGs vs. the pain that compiler writing is in anything else). Others are always unhappy because they're missing that idealistic 10% that remains on the road to clean and pure declarative perfection.

Well, we're not in a pure world. We don't have perfect computers. In fact, our computers just suck. We have to accept compromises, if we want to get anything done. Prolog is almost perfectly balanced between shitty languages that abuse imperfections in computer architectures and mathematical purity. Personally, I'm grateful to Colmerauer and Kowalski for creating it. I can only imagine the mess I'd make of it if I had to do it myself.
It's possible that X could be an integer, but Prolog can't do anything with that, so it just throws an error. I suppose an alternative would be to return 0 and then 1 and then -1 and then 2 etc but I have no idea why anyone would care.
And yet, there is a further possibility: why not say `when(nonvar(X), integer(X))`? One system tried to go that path: Mu- and Nu-Prolog. However, the practical result was that an erroneous predicate simply succeeded and you got an answer with tons of residual constraints (at that time, in that context, called floundering goals). A lot of research went into detecting such flounders, but nothing came out of this effort that is in current use.
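In systems that still provide when/2 (for example SWI-Prolog), the effect is easy to reproduce: the check is simply suspended, so a query can succeed with the type test left pending as a residual goal, which is precisely the floundering described above:

?- when(nonvar(X), integer(X)), X = a.
false.

?- when(nonvar(X), integer(X)).
when(nonvar(X), integer(X)).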
As for modern Prolog: I know commercial systems still exist, but I assume most work has switched to SWI-Prolog?
Includes an evaluation license (free) and a personal license for EUR 165.
It's frickin awesome, logical + engineering = get things done
We just released an initial dependency tool for Cuelang based on go mods
If anyone is familiar with these topics, any pointers would be much appreciated!
They went to the effort of writing special microcode for the unifier, which meant the basically 16-bit 1100 series workstations were competitive in performance with other Prologs at the time.
In math logic, you learn propositional/sentential logic, then spend a lot of time with predicate/first-order logic. At this point, you can "return to mathematics" to do any math stuff imaginable under the sun. However, if you want to spend more time being a logician, you can delve into a wide variety of topics, second-order logics, constructive logic, model theory, proof theory, etc, etc, etc.
Computer-science/logic-programmers on the other end?
First, they don't start with prop or pred logic. There is absolutely zero building-from-the-ground-up about how you could do computations using prop logic, then maybe moving on to more sophisticated stuff like pred logic.
No, they will throw Hoare logic at you, and Horn clauses, and relational databases, and bound and free variables, and cuts, and what not. And if you complain, I guess you're just not cut out for logic programming. (I guess hardcore full-time mathematical logicians are not cut out for it either, including Gödel, mind you.)
You think they'd stop there. I'm afraid not, far from it.
Next they want you to spend all your time learning about Church and Turing, and the Curry-Howard correspondence, and to pretend that "real" logic started in 1933 and 1936, because anything before lambda calculus is too inferior to be used in computers.
Oh but wait, they're not done yet.
Now they want you learning a brand new foundation of logic, math, and CS, that even mathematicians hardly need or use for 99% of their work. They want you to think in terms of type theory, and category theory. (So I guess when they say 'brand new foundation' they really mean 'brand new pair of foundations from which you can pick').
I'm starting to think there is something seriously wrong with the way logic is approached in theoretical CS, and logic programming. It's not minimal, and appears to serve as an elitist gatekeeping device, while mathematicians are going about their daily lives producing spectacular results with just prop+pred logic, and category theory has only started getting used in a tiny fraction of mathematical work.
Systems like Prolog are based on the idea of "proof search": given a logical statement, find a proof of that statement. Humans do this all the time with classical logic, you might say -- but then, there are very famous statements expressible purely in first-order logic that nobody has found a proof of. This is anathema to computing! We want an "effective method", a "decision procedure", that allows us to compute for finite time and terminate with a positive (or negative!) proof. And we know, thanks to Church and Turing, that no such procedure exists for first-order logic: any always-terminating procedure will miss proofs for some first-order statements.
In order to give ourselves workable options for proof search, we need to tame our logics somehow. This taming usually comes in the form of restrictions relative to classical logic. For example, Horn clauses are a fragment of classical logic that we _can_ assign a reasonable proof procedure to -- of which the most basic is probably SLD resolution. But these restrictions traditionally sacrifice more power than we'd like, so we add various things back in, trading computational difficulty (but not decidability!) for expressivity. This is a significant source of complexity, no doubt!
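As a tiny illustration of the Horn fragment, here is the classic ancestor program (the names are made up for the example); SLD resolution answers queries against it directly:

parent(tom, bob).
parent(bob, ann).
ancestor(X, Y) :- parent(X, Y).
ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).

?- ancestor(tom, ann).
true.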
Computational logic is hard because we're asking more of it. Classical logic has been around since at least Aristotle, so it's perhaps no surprise that it's been refined into such a simple, well-understood form. Even then, I think if you traveled into various classical modal logics, you'd find those just as inscrutable at first as the logics we use in logic programming.
> Hoare logic
Mmh... For the most part, Hoare logic is back on the "for humans" side of things, as it's about verifying software rather than implementing software. It brings to bear the full power of classical logic(1), but since you're applying it to programs, most of which have incredibly non-monotonic semantics, you have to consider truth at each point in the program. Most of the machinery in Hoare logic over classical logic is about relating all of those points -- all of those worlds, in a model-theoretic sense.
I personally find Hoare logic easier from a mathematical reasoning perspective than any of the logic systems designed for implementation. I think that makes sense, though, because Hoare logic is not about automation.
(1) You can instantiate Hoare logic over any particular logical system, so you could apply it for automated verification, but I don't think that's the case the parent was considering.
Think about implication in propositional logic. Take two entirely unrelated propositions a and b. According to this logic either a → b or b → a is true. This, btw, cannot happen in Prolog.
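This can be checked mechanically over the classical truth tables. A small sketch, encoding false as 0 and true as 1, with member/2 from library(lists):

% impl(A, B, V): V is the classical truth value of A -> B.
impl(0, _, 1).
impl(1, B, B).

% No valuation falsifies (a -> b) \/ (b -> a):
?- \+ ( member(A, [0,1]), member(B, [0,1]),
        impl(A, B, 0), impl(B, A, 0) ).
true.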