There are examples in the wiki, but the repo README does not reference them. The authors should fix that.

This is the hello world:

    :- module hello.
    :- interface.

    :- import_module io.

    :- pred main(io::di, io::uo) is det.

    :- implementation.

    main(!IO) :-
        write_string("Hello, world!\n", !IO).
My first reaction is: why the :- at the beginning of those first lines? I'm sure there are pretty good reasons for it, probably rooted in the history of this lineage of programming languages.

That's fine if the authors are interested only in developers from that background. If they want to reach the general developer community, they should adopt a more mainstream syntax to ease onboarding. Elixir gave a nice lesson about that: it gave Erlang a Ruby-like syntax and onboarded many developers from non-functional languages, while still being Erlang underneath, and still functional.

The interpreter/compiler could probably do some more work and understand what module, interface, import_module, pred and implementation are without the :- prefix. To my eyes that :- is only noise: I don't need it to read and understand the program. The language should help me, not the other way around.

Other points along the same line:

* The ! before IO reminds me of the not operator in other languages, and I wonder what it means. I should read the documentation, but it feels strange.

* The :- again after the function definition. Other languages have simpler characters to type (Ruby and Elixir have do, Python has :, C and Java have { ).

* The . at the end of statements. From other examples it seems that Mercury uses it exactly as Erlang does (one(), two(), three().). Elixir managed to avoid both the commas and the full stop: it uses do ... end instead ({ } is more popular). Python uses a different approach, which I don't like, but it's popular too.




Mercury's roots are indeed the explanation and justification for many of these points: Mercury syntax is deliberately designed to be very close to Prolog syntax. In fact, it is so close to Prolog that if you remove the declarations, then many Mercury programs are also valid Prolog code.

Regarding ":-", there is a difference between writing:

    :- implementation.
and

    implementation.
The latter is a fact and can be read as "implementation holds". The former is a directive that tells the compiler something.

As in Prolog, :- is an ASCII transliteration of ←, i.e., an arrow from right to left. A clause of the form:

    Head :- Body.
is read as: "If Body is true, then Head is true." Such clauses are sufficient to describe every known computation.
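
For a concrete instance, here is a quick sketch (parent/2 and grandparent/2 are made-up names):

    grandparent(X, Z) :-
        parent(X, Y),
        parent(Y, Z).
This is read as: "If parent(X, Y) is true and parent(Y, Z) is true, then grandparent(X, Z) is true."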

In Mercury, !Var is a state variable. It is a shorthand for naming intermediate values in a sequence. Without using state variables, you can write the code equivalently as:

    main(IO0, IO) :-
        write_string("Hello, world!\n", IO0, IO).
State variables are more explicit than using DCG notation, and at the same time still easier to write than pairs of variables that are threaded through.
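
For comparison, here is (roughly) the threading that DCG notation performs implicitly in Prolog; greet//0, hello//0 and world//0 are made-up nonterminals:

    greet --> hello, world.
is expanded by the system to something like:

    greet(S0, S) :-
        hello(S0, S1),
        world(S1, S).
Mercury's !Var notation abbreviates exactly this kind of intermediate-state threading (S0, S1, ...), without the hidden rewriting step.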

The "." at the end of clauses is likewise inspired by Prolog syntax.

Please note that Prolog programs are, syntactically, also Prolog terms, and "." is used at the end of each clause.
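
You can see this at the toplevel: (:-)/2 is just a binary functor (u, p and q here are made-up symbols):

    ?- Clause = (u :- p, q), Clause = ':-'(Head, Body).
    % Head = u, Body = (p, q): the clause is the term ':-'(u, ','(p, q)).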


>> Please note that Prolog programs are, syntactically, also Prolog terms, and "." is used at the end of each clause.

Heh. You won't believe how much trouble I got into for using Prolog terminology like "term" and "predicate" when speaking to logic programming people, even though they all know Prolog perfectly well.

You can probably imagine why: in FOL, and so in logic programming, a "term" is a variable, a constant or a function, and a "predicate" is a concept. In Prolog, a "term" is, well, everything! And "predicate" is either a clause or what LP people call an atom. Oh, and, btw, an "atom" in Prolog is either a predicate symbol, a nullary function, or a constant, whereas in LP it's a predicate symbol followed by a vector of terms in parentheses.

How all this terminological confusion came about, I don't know, but I sure am suffering because of it, and because I first learned Prolog before really studying logic programming as such.


The terminology of first-order logic is indeed quite different from that of Prolog. A key reason for this is that in FOL, we have to distinguish between the syntactic entities and their interpretations. So, in FOL, we have predicate symbols that are assigned relations by an interpretation, constant symbols that are assigned domain elements, and function symbols that are assigned functions. In addition, many theorems also require that we distinguish between countable and uncountable signatures.

In contrast, Prolog is confined to a single kind of interpretation, namely Herbrand structures, where each term "stands for itself", and where we only need symbols that already occur in the clauses we are considering:

https://en.wikipedia.org/wiki/Herbrand_structure

So, in Herbrand structures, the interpretation of a term T is simply the symbolic representation of T, i.e., "T". Importantly, one can show that a set of clauses has a model if and only if it has a Herbrand model.
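
As a small sketch, consider this made-up program:

    nat(zero).
    nat(s(X)) :- nat(X).
Its Herbrand universe is {zero, s(zero), s(s(zero)), ...}, and its least Herbrand model is {nat(zero), nat(s(zero)), ...}: every term stands for itself, and no symbols beyond zero, s/1 and nat/1 are needed.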

This also has important semantic consequences: For example, due to the compactness theorem (and its consequence, the upward Löwenheim–Skolem theorem), one can show that important concepts such as the transitive closure of a relation, the class of connected graphs, and the natural numbers with only the intended model (up to isomorphism) cannot be expressed in first-order logic!

https://en.wikipedia.org/wiki/Compactness_theorem

Yet, we can readily express these things in Prolog, and in fact almost every Prolog program does just that, or needs them!
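
For example, the transitive closure of a (made-up) edge/2 relation is the textbook two-clause definition:

    path(X, Y) :- edge(X, Y).
    path(X, Y) :- edge(X, Z), path(Z, Y).
In the least Herbrand model, path/2 denotes exactly the transitive closure of edge/2, which no first-order axiomatization can pin down over all interpretations.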

Why? Because Prolog (implicitly) only considers least Herbrand models, so we do not need this layered syntactic view, and also do not run into these semantic issues when programming in Prolog.

An argument can be made that in this case, what at first seems like a restriction in fact extends the usefulness of the language. A similar argument can be made also for several constructs within the language.


While Prolog’s operational semantics are given in terms of least Herbrand models (at least, before you consider negation), you can also view a Prolog program as a subset of FOL and give a traditional model-theoretic account of its semantics. This may be less useful practically, for the reasons you give, but sometimes it’s nice to consider that a symbol “fred” might actually refer to a person called Fred rather than just be symbols all the way down...

(It always surprises me when Prolog texts refer to the least Herbrand model as the “intended model” - it’s definitely not what I intend when I write Prolog!)

So while we can say that an “ancestor” relation in Prolog correctly defines transitive closure within the least Herbrand semantics, at some point we actually have to relate that “semantics” to the real world if we want to know what our programs mean.

Edit: putting this all another way, my understanding of (pure, positive) Prolog’s operational semantics is that the least Herbrand model contains exactly those sentences that are true in all models of the program, whether those models are Herbrand or otherwise. So it’s not quite true to say that Prolog only considers the least Herbrand model, but it may as well only do so, because the (operational) result will be the same. It’s been a while since I’ve looked at this in any great depth, though.


>> (It always surprises me when Prolog texts refer to the least Herbrand model as the “intended model” - it’s definitely not what I intend when I write Prolog!)

Good point.


What you call an interpretation I know as a pre-interpretation (from Foundations of ILP by Nienhuys-Cheng and de Wolf, which is my go-to textbook; I think you're going by Lloyd?). A pre-interpretation assigns objects in the domain to terms (including variables!), and an interpretation assigns truth values to atoms of predicates. This distinction is perhaps a bit unwieldy, but what do you call a truth assignment to predicate atoms, if you call the mapping between entities in the domain of discourse and the terms of the language an "interpretation"?

Or is that resolved by having a Herbrand interpretation?

>> Yet, we can readily express these things in Prolog, and in fact almost every Prolog program does just that, or needs them!

Yes, but Prolog cheats: it allows any (Prolog) term as an argument to a (Prolog) predicate, including another (Prolog) predicate, as in meta-predicates. Meta-predicates are then not strictly speaking first-order, and so Prolog is not strictly first-order.

Or, more to the point, Prolog doesn't make a distinction between constants and predicate symbols, or between predicate atoms and functions. So it accepts, as arguments to (Prolog) predicates, (Prolog) terms that are syntactically identical to functions but have predicate symbols instead of function symbols. In FOL, the sets of predicate symbols and function symbols are disjoint, so you can't do that.
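
A small sketch of what I mean (likes/2 and believes/2 are made-up):

    likes(mia, vincent).                      % likes/2 used as a predicate
    believes(marsellus, likes(mia, vincent)). % likes/2 used as a function symbol
In FOL this is ill-formed, because each symbol must be either a predicate symbol or a function symbol, never both.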


I used interpretation as it occurs for example in Computability and Logic by Boolos, Burgess and Jeffrey. It is also explained here:

https://en.wikipedia.org/wiki/Interpretation_(logic)#Interpr...

Please note that from a logical perspective, a (pure) Prolog program is always a set of first-order clauses, since all variables that occur in a Prolog program are implicitly quantified to range over individuals and hence terms, not over predicates or sets of predicates etc., which would be characteristic of second- and higher-order logic.

This also includes meta-calls and meta-predicates! This is because, in Prolog, if a variable P occurs as a goal anywhere, it is interpreted as call(P), i.e., a universally quantified variable that is the argument of the predicate call/1. And call/1 is expressible in first-order logic, using at most one clause per predicate:

    call(true).
    call(X = Y)    :- X = Y.
    call(dif(X,Y)) :- dif(X, Y).
    call(X #= Y)   :- X #= Y.
    etc.
So, even though Prolog programmers call predicates involving meta-calls "higher-order" programming (since predications are passed as arguments), it is not an instance of higher-order reasoning from a logical perspective. A genuinely higher-order statement must quantify (at least) over predicates, for example: "For all unary predicates, the induction axiom is true." There is no way to express this in Prolog, because its universally quantified variables implicitly always range exclusively over domain elements (Herbrand terms), and you cannot quantify over predicates.
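
As a small sketch of this reading (twice/1 is a made-up predicate):

    twice(P) :- call(P), call(P).   % the same as: twice(P) :- P, P.

    ?- twice(write(hello)).
    % writes: hellohello
P remains a first-order variable ranging over Herbrand terms; it is call/1 that interprets the term bound to P as a goal.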

Regarding all-solutions predicates like setof/3, which are also higher-order predicates in the sense of Prolog: These predicates are not monotonic (for example, adding a fact may cause a query to fail that previously succeeded), and they therefore do not correspond to predicates in classical logic, where entailment is monotonic.
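
To make this concrete, a minimal sketch with a made-up p/1: if the database contains only the fact

    p(a).
then the query

    ?- setof(X, p(X), [a]).
succeeds. After adding the fact p(b), the very same query fails, because the set of solutions is now [a,b]. In classical logic, adding axioms can never invalidate a previous consequence.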


To add to triska's excellent explanation, the thing to understand about the syntax of logic programming languages is that it's not something arbitrary that someone thought up, like the syntax of most other programming languages. Their syntax is in fact that of First Order Logic.

In Prolog and similar languages, everything you see in a source file is a Horn clause: a clause (a disjunction of literals) with at most one positive literal. The following are Horn clauses:

  ¬p ∨ ¬q ∨ ¬t ∨ u
  ¬p ∨ ¬q ∨ ¬t 
  u
Horn clauses are typically written as implications, with the positive literal at the start of the clause, followed by the left-facing implication arrow:

  u ← p ∧ q ∧ t
  ← p ∧ q ∧ t
  u ←
In Prolog, Mercury, and similar languages, the left arrow is rendered as ":-" and the comma, ",", is used as the conjunction operator. The full stop is used to terminate a clause:

  u :- p, q, t. (1)
  :- p, q, t.   (2)
  u.            (3)
Horn clauses with exactly one positive literal, like (1) above, are called definite clauses, or "rules"; clauses with no positive literals (2) are called goal clauses, or directives; and clauses with no negative literals (3) are called unit clauses, or facts. The literal before the :- is called the head literal and the rest are the body literals.

The structure of a clause denotes its semantics. To simplify (and avoid a lengthy exposition of Prolog's refutation-based theorem proving): a rule can be proved true or false, a fact is always true, and a goal is executed when it is encountered. The queries entered at the REPL to execute a program are also goals (that match a rule's head and are proved by refutation).
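
A minimal sketch of the whole cycle, with made-up p, q, t and u:

  p. q. t.
  u :- p, q, t.

  ?- u.
  true.
The query ?- u. is itself a goal clause; the system adds the negation of the query to the program and derives a contradiction (the empty clause) by resolution, which proves u.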

This is all part of the appeal of Prolog, Mercury, etc: that everything is the same kind of object (a Horn clause). It's like having the most general LEGO® brick possible, general enough to build anything that can be built.

In Erlang, the syntax remains, but the FOL semantics are gone, and perhaps that's what drives you to think that some of the syntax above is unnecessary or odd. In truth, the syntax is the semantics of the program (which is, itself, a proof). You could, of course, change the syntax to make the language look more familiar, but then you'd lose the benefit of having a programming language with the syntax and semantics of FOL, which would be a shame.



