The Power of Prolog (metalevel.at)
480 points by tosh 8 months ago | 158 comments

Thank you very much for the publicity, I greatly appreciate it!

This book was discussed on HN about a year ago:


Since then, I have added the following new chapters:

Prolog Business Cases: https://www.metalevel.at/prolog/business

Sorting and Searching: https://www.metalevel.at/prolog/sorting

Cryptography with Prolog: https://www.metalevel.at/prolog/cryptography

Engineering Aspects: https://www.metalevel.at/prolog/engineering

Artificial Intelligence with Prolog: https://www.metalevel.at/prolog/ai

I have also made many improvements and additions to the other chapters.

Also, I have made almost everything available in a public git repository so that you can track the book's progress and view it offline:


It is my sincere hope that you find this material useful, and I welcome all suggestions and comments.

Also, thank you very much to all readers for the extremely constructive feedback that I have received in the past year. Your comments have made this work especially worthwhile.


I've shared this with anyone who will let me divert the conversation to Prolog. At work we have the Haskell folks, the Lispers who move everything towards Lisp, and I'm the one who moves every conversation towards Prolog, then I send the link to this. :-D

Thank you for sharing this, I really appreciate you participating in this thread and this awesome anecdote!

One related workplace anecdote that a former fellow student told me: He was working on a rule-based medical recommender system, and he recognized that Prolog was an excellent fit for this project. When he first suggested that the team switch to Prolog, they were very skeptical. But someone high up in the chain was a great fan of Prolog and approved this suggestion. A few weeks later, they had completed the implementation, and the Prolog-based solution was running. My colleague told me that, a few months after that, the whole team regarded Prolog as the only viable solution to all further projects from then on!

> My colleague told me that, a few months after that, the whole team regarded Prolog as the only viable solution to all further projects from then on!

Hm, there is such a thing as the right tool for the job. If all their jobs were very similar, that makes sense; otherwise, not so much.

Are you aware of the fact that inside every Windows NT installation there was a Prolog engine to help with network configuration?


Personally, I found this to be the fun aspect of this story: that what one regards as "the" (as opposed to "a") right tool for the job can change so quickly, and can depend so strongly on one's perspective!

Thank you for the link to the Prolog-based network configuration!

The older Lisp folks often have a Prolog integrated in their system...

Actual Prolog. Allegro ships a fully integrated Prolog as part of their product.

Now that you say that, I think Allegro and LispWorks both have that... neat!

PicoLisp has one.

If you are talking about Pilog, the PicoLisp "Prolog" engine, it really is not the same thing at all as a proper Prolog.

Oh, I was honestly not aware. Now I know to look for differences. You don't have to, but could you give some pointer or clue as to why PicoLisp's Prolog engine is not a proper Prolog?



And the Haskell folks have list comprehensions and filtering that are roughly equivalent to the Prolog search algorithm.

> I send the link to this. :-D

And what kind of reaction are you getting? As I just wrote elsewhere, I don't think this is useful for complete Prolog beginners to learn the language or even get a feel for it. Has anyone you sent this to come back with "that's neat, I read the first N chapters and have just written my first Prolog program on my own"?

Yes, but we are stuck in our ways. We like all sorts of languages: someone talking enough about J got me learning J, someone else not shutting up about Rust got me looking into Rust, and me not shutting up about Go got others to check it out. I got the entire company on TypeScript instead of JavaScript. :-) Our normal tech stack is pretty boring and safe, but I'm glad there are a few folks who are open-minded and trying something else. No one has used Prolog in production that I know of but me. I used it to validate some business rules.

Where do you work? Sounds neat!!

It sounds like a level of Hell that Dante forgot to mention.

Malbolge maybe?

What makes you feel that way?

I studied some Prolog and Haskell in my undergraduate days. Now my work is mostly done in Java, e.g. Android.

What are strong cases to use Prolog/Haskell these days? :D

Prolog-ers and Lispers are friends <3

ebony and ivory / live together in perfect harmony


Where do you work that you encounter so few imperative languages?

A section on dynamic programming w.r.t. modern AI techniques would complement it nicely, considering the resurgence of the field :D

Thank you, this sounds like a wonderful addition!

In general, the AI chapter is what I am currently working on most. I am working not "only" on the actual content, but also on the applications I want to present in this chapter.

Anton Kochkov has recently motivated this chapter via a dedicated issue for AI/ML applications of Prolog, and I invite you to track its progress at:


Hi Markus, thanks for your hard work on this website (and also on all the SWI constraint libraries).

For your machine learning section, you might want to consider discussing Metagol [1], a modern ILP system that can learn recursive theories and perform predicate invention.

The core Metagol implementation is tiny and relies on a meta-interpreter (the technique is called Meta-Interpretive Learning). It is a radically different approach from previous ILP systems, but much simpler and easier to understand. It should be interesting to anyone who wants to do machine learning with Prolog.
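For readers who haven't met the term: a Prolog meta-interpreter can be astonishingly small. The textbook "vanilla" interpreter (a generic sketch of the technique, not Metagol's actual code) looks like this:

    % solve/1 interprets a goal against the program's own clauses.
    % Note: clause/2 requires the interpreted predicates to be
    % dynamic (or otherwise accessible) in most systems, e.g. SWI-Prolog.
    solve(true) :- !.
    solve((A, B)) :- !,
        solve(A),
        solve(B).
    solve(Goal) :-
        clause(Goal, Body),
        solve(Body).

MIL extends this idea: when clause/2 finds no matching clause, the interpreter may instead construct one from a metarule, so it learns programs while proving goals.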


[1] https://github.com/metagol/metagol

Thank you very much Stassa!

I have added Metagol to the AI chapter, please have a look.

That looks great, cheers! :)

Very nice book. Do you know which companies are using Prolog commercially?

Thank you for the kind words!

Regarding companies that use Prolog, here are a few examples:

A third of all airline traffic is handled by systems that run SICStus Prolog:


A large portion of the New Zealand stock exchange is powered by Prolog:


Oracle uses Prolog in the JVM specification:


IBM has used Prolog for natural language processing in Watson:


RedPrairie uses SICStus Prolog for optimization tasks.

Prolog is used for voice processing on the International Space Station.

The Rust compiler's semantic rules are being recast internally in a Prolog style.


Windows NT used to have a Prolog-based system for handling interrupts; not sure how much of it is still relevant on Windows 10.

I was pretty sure that it was for configuring network devices.

That link just shows a proposed third-party IRQ config system for DOS that wasn't actually shipped in Windows (in addition to the NT networking config).

Unbelievable. I've often wondered how a Prolog-friendly OS would feel...

>A third of all airline traffic is handled by systems that run SICStus Prolog

But are those systems responsible for all the I.T. problems that are plaguing the industry?[1] (I've been under the impression the issues were primarily with COBOL/mainframe systems, but I wasn't aware Prolog was so widely used by airlines.)

[1] https://www.google.com/search?hl=en&q=airline%20computer%20f...

Yes Markus does know, and has sprinkled examples throughout the book! One need look no further than the first chapter to find them.

One that was new and interesting to me is that gerritcodereview (by Google) is written in Prolog.


It's written in Java but embeds Prolog as a rules engine for deciding when a change may be submitted.


Gerrit (the code review tool) uses Prolog internally

* https://gerrit.googlesource.com/prolog-cafe/

I know that Negota Logic uses Prolog for the reasoning in their Legal Tech app (which is mainly written in Java). Since the app itself has a terrible user experience and looks like it's stuck in a nineties dystopia, there might be better examples.

I think that reinforcement learning might be a good fit for Prolog. Do you have an opinion on that?

In general, my opinion regarding machine learning and, in fact, also other tasks, is that Prolog is no less suitable to implement or teach these approaches than - for example - Python, which is currently often used in these areas. Dedicated libraries for machine learning are available and can be written for Prolog as well as for any other language, and Prolog of course also lets you easily interact with other programs via pipes and sockets.

So, yes, I agree: Prolog may well be suitable for this! In practice, for statistics-based methods, the core logic will likely be carried out by dedicated hardware which you may have to program in its own language. Prolog may still play a part also with such approaches, for example to generate test cases and correctness certificates, and as a rule-based safety measure to prevent certain situations from happening.

One of the coolest things I've seen is to use Prolog with CLP(FD) to solve the 7-11 problem. The problem basically says the sum of the prices of four items is $7.11, and the product is $7.11 too (no rounding); find the prices of these four items.

This can be solved in two lines of code that give the (unique) solution in a second. Not even my expensive Mathematica can do this!

    ?- use_module(library(clpfd)).
    ?- Vs = [A,B,C,D], Vs ins 1..711, A * B * C * D #= 711000000, A + B + C + D #= 711, A #>= B, B #>= C, C #>= D, labeling([ff, down], Vs).
    Vs = [316, 150, 125, 120],
    A = 316,
    B = 150,
    C = 125,
    D = 120 ;
Another great thing Prolog is good at is type inference. After all, type inference, in its simplest form, is just syntax-directed constraint generation followed by unification to solve the constraints: exactly what Prolog gives you by default. You can write a type inference engine for simply typed lambda calculus in Prolog in 10 minutes. Google used to have a project to do type inference for Python written in Prolog, although they've since switched away: https://github.com/google/pytype
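To give a flavour of that "10 minutes" claim, here is a minimal sketch of such an inferencer (the term encoding and predicate names are my own choices, not from any particular project):

    % Terms: var(X), lam(X, Body), app(F, A).
    % Types: Prolog variables (type variables) and (A -> B) arrow types.
    % Env is a list of X-Type pairs; prepending gives correct shadowing.
    type(Env, var(X), T) :-
        member(X-T, Env).
    type(Env, lam(X, Body), (A -> B)) :-
        type([X-A|Env], Body, B).
    type(Env, app(F, Arg), B) :-
        type(Env, F, (A -> B)),
        type(Env, Arg, A).

A query like ?- type([], lam(x, lam(y, app(var(x), var(y)))), T). binds T to an arrow type of the shape (A -> B) -> (A -> B); unification does all the constraint solving for free.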

Hah! I used exactly this example to motivate the arbitrary precision CLP(FD) implementation, published as The Finite Domain Constraint Solver of SWI-Prolog, FLOPS 2012, LNCS 7294:


The observations on type inference are also spot on. The project you mentioned was previously discussed on HN, but the page is no longer available:


Oh you implemented this CLP(FD) and wrote that paper! Thanks a lot! My 7-11 example is probably because I first saw it in your paper then.

This reminds me of a recent HN post about a simple datalog engine being used in the rust borrow checker.




But, but, the product of four prices cannot possibly be measured in $. You must mean $^4.

Edit: Here's a solution in Haskell. I've been paid to program in both Prolog and Haskell, but it was a while ago; my knowledge of both languages is very rusty.

  main = putStrLn $ show [(a, b, c, d) | a <- [1..711], b <- [a + 1..711],
         c <- [b + 1..711], d <- [711 - a - b - c], c <= d, a * b * c * d == 711000000]

Amazingly, this gives an answer within not too many tens of seconds of full CPU usage. (I interrupted it before it finished searching the full space, though.)

Ooo! Lemme show off this CLP(FD) solution to the "Zebra Puzzle" https://en.wikipedia.org/wiki/Zebra_Puzzle that I made after reading the "Logic Puzzles with Prolog" chapter:


That makes Prolog look downright useful! I've heard from lots of people that it's fun as an esoteric language, and some had a homework assignment or two in it, but that nobody really uses it in practice. Do you think this might be more useful as a module to solve logic puzzles, rather than as a complete language by itself? I learned the very basics, but haven't worked with it enough myself to say whether that would be better than keeping it as a stand-alone language.

Yes, a modern Prolog with CLP(FD) is very well suited to solve logic puzzles. There are many examples on the web, but you can look at this book on ECLiPSe CLP (nothing to do with the IDE, it's a Prolog system with good constraints support): http://www.anclp.pl/
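To give a concrete taste of that style, here is the classic SEND+MORE=MONEY cryptarithm in SWI-Prolog's CLP(FD) (a standard textbook formulation, not taken from either book):

    :- use_module(library(clpfd)).

    % Assign distinct digits so that SEND + MORE = MONEY,
    % with no leading zeros.
    money(Vs) :-
        Vs = [S,E,N,D,M,O,R,Y],
        Vs ins 0..9,
        all_different(Vs),
        S #\= 0, M #\= 0,
                  1000*S + 100*E + 10*N + D
                + 1000*M + 100*O + 10*R + E
        #= 10000*M + 1000*O + 100*N + 10*E + Y,
        label(Vs).

Querying ?- money(Vs). yields the unique solution 9567 + 1085 = 10652.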

However, nowadays I'd start with MiniZinc and its default solver Gecode: http://www.minizinc.org/ For combinatorial puzzles it's even easier than Prolog and very efficient. Just read the short tutorial and you should be good to go solving the Zebra/Einstein problem or similar: http://www.minizinc.org/downloads/doc-latest/minizinc-tute.p... MiniZinc is purely declarative. When more control is needed, you can use Prolog.

Well, the idea that "nobody really uses it in practice" doesn't hold up to the evidence in this thread, eh? It's used and under active development. I think it lacks hype. :-)

> Do you think this might be more useful as a module to solve logic puzzles, rather than as a complete language by itself?

I was thinking about it this morning and I came to the conclusion that Prolog really is AI. They did it. We have AI already and most of us just failed to notice. The Unification algorithm at the heart of Prolog (I'm not an expert!) is pretty much what your brain does when you solve logical puzzles and problems. We have a machine that can think.

The way I think of Prolog is as a super-calculator that can be applied to a huge range of problems and tasks. (One thing to be aware of is that if you have a Prolog-shaped problem and try to solve it in some other language you'll likely wind up reinventing some portion of Prolog in that language! If you look up solutions to the Zebra Puzzle written in other languages you'll see that most of them proceed by implementing some sort of logic solver that resembles a crude Prolog.) There may be practical real-world issues in performance or interfacing for a given application, but in general I'd say that good ol' Prolog is vastly underutilized. (Put another way, we're working harder than we should be.)

> The Unification algorithm at the heart of Prolog (I'm not an expert!) is pretty much what your brain does when you solve logical puzzles and problems.

What makes you think that? I don't think we have a clear idea of what the brain does when doing "intelligent" things, but syntactic unification is very probably not it. For whatever it's worth, the CLP(FD) stuff you posted isn't "just" unification, it needs additional powerful constraint solving algorithms behind it to work as well as it does.

All that unification does is solve simple equations between terms without knowing anything about what is going on:

    ?- f(X, b) = f(g(a), Y).  % query
    X = g(a),                 % solution
    Y = b.
This is nice, but it's not exactly intelligence.

Yay for Prolog enthusiasm! I like calling it "magical" too. But let's not go overboard with grand AI claims.

> What makes you think that?

Direct observation.

I learned the unification algorithm from studying Matthew Rocklin's Python kanren implementation [1]. The next time I was thinking out the solution of a programming problem, as I was witnessing my brain (or mind, if you prefer) solve the problem, I realized on a "meta-track" that what it was doing was exactly unification. I actually "saw" it happening and had a context to recognize and name it. I think it helped that I was working in a domain whose models are very close to the forms (computer programming), but the rational, logical thinking process is the unification algorithm.

> I don't think we have a clear idea of what the brain does when doing "intelligent" things

Well, I don't mean to suggest that rational thinking is the only kind of thinking! I'm just pointing out that it's a kind of thinking that really has been automated. It's AI. (Not all that AI is or ever will be, but it counts.)

Turing essentially made concrete the thought process of solving (some kinds of) math problems, noticed that the "atomic elements" of the process were a finite set, and designed a machine to automate those steps, so in a sense, computers have always been AI.

> CLP(FD) stuff you posted isn't "just" unification, it needs additional powerful constraint solving algorithms behind it to work as well as it does.

Yes, and the brain has powerful sensory systems attached to it. I think science has identified something like twenty-seven different human senses. These all supply "finite domains" to the reasoning system. The sensory systems are very active, and can function in "virtual" mode (aka imagination) to permit a kind of thinking by simulation (analogs.) Animals have this sort of thinking as well (that's why they can dream.)

The linguistic/digital system reifies this fast primary mode thinking into slower rational thinking distinctions and reasoning, which is then used to build coherent models of the world which are then used to generate behavior. [2] If you codify that thinking process you get the unification algorithm.

[1] https://github.com/logpy/logpy

[2] ...and behavior generates new sensory inputs... The whole thing constitutes a feedback loop linking the real world with your subjective experience. Linguistic thinking permits direct modeling of recursion, which permits thinking about open-ended loops in behavior in a concrete way, which permits self-referential modification of the reasoning structures, enabling rapid innovation in behavior (aka "culture".) You're a cybernetic process that can become a Gödel machine.

This Python code does a full search and finds the unique solution instantly:

    p = 711
    q = 711000000
    for a in range(1, 1 + p // 4):
      if q % a == 0:
        for b in range(a, 1 + (p - a) // 3):
          if q % (a * b) == 0:
            for c in range(b, 1 + (p - a - b) // 2):
              d = p - a - b - c
              if a * b * c * d == q:
                print (a, b, c, d)
Does Prolog do the same pruning? Alternatively, it could be dumber (omit some of the pruning that my code does) or smarter (start by factoring 711000000 or something).

You could write the same solution in Prolog. The difference between yours and the original solution is that the latter follows the original problem specification and is easier to compare to it. That is, it defines what the problem is, not how to solve it.

Prolog doesn't do that automatically, but you can write the same thing in Prolog too:

    xkcd_ugly(A, B, C, D) :-
        P = 711,
        Q = 711000000,
        Max_A is 1 + P // 4,
        between(1, Max_A, A),
        Q mod A =:= 0,
        Max_B is 1 + (P - A) // 3,
        between(A, Max_B, B),
        Q mod (A * B) =:= 0,
        Max_C is 1 + (P - A - B) // 2,
        between(B, Max_C, C),
        D is P - A - B - C,
        A * B * C * D =:= Q.
In SWI-Prolog, I get the following timings:

    ?- time(xkcd_ugly(A, B, C, D)).
    % 383,992 inferences, 0.056 CPU in 0.056 seconds (100% CPU, 6825328 Lips)
    A = 120,
    B = 125,
    C = 150,
    D = 316 ;
    % 3,274 inferences, 0.003 CPU in 0.003 seconds (100% CPU, 1229464 Lips)
That is to say, it found the first solution in 0.056 seconds CPU time, then took an additional 0.003 seconds to prove that there are no further solutions. (This is the minimum of a handful of runs, there is quite a bit of variation.)

Quick and very dirty benchmarks on the same machine run your Python code in about 0.02 seconds. The CLP(FD) code above takes about 0.1 seconds but it's a lot clearer.

Thank you! That makes sense.

I hated this language (and I'm saying it knowingly, despite the fact that this is most definitely a Prolog-loving group). I found its working model difficult for me and the ROI quite low. I wanted to make my critique a bit constructive, though, and before spitting out something of my own, I decided to look at whether there's already anything available in this regard. I think Andre Vellino puts it quite well (or at least better than I could) in his "Prolog's Death" blog post[1]. I invite you to have a look at it. Leaving aside Prolog's language design decisions, logical languages in general may for now just be a bit premature, something not meant to flourish in our time. However, computing hardware evolves, and as it gets more complex and harder to predict, the claim that the imperative paradigm is a better fit for the underlying physical architecture becomes less true with every passing day. Although I can't say I'm thrilled about it, in time the logical languages may become the better offer for the average Joe looking for quick results, something closer to what Python is for now.

[1] https://synthese.wordpress.com/2010/08/21/prologs-death/

Quote from the link: "The most confusing thing about Prolog is that, whatever algorithm you implement with it must be on top of the built-in ones, namely depth-first search, and unification (and only using recursion rather than iteration). "

When I discovered that during my studies, I decided that Prolog was not for me. At first it looks neat for toy problems, and then you're fighting the solver until you give up.

It's so nice to hear Prolog is alive and well. While at university I spent quite some time wrangling with it and grew to love it. So much problem-solving smarts built into the runtime.

Here is my crowning achievement from back then in 1993: a genetic programming simulator, in pure Prolog, that evolved ants to follow pheromone trails.




In 2005 I used Prolog to solve pathfinding for competitive maps in CS 1.6, based off weighted examples from interviewing professionals; it even had a plugin architecture to extend each map, though it only worked well for Dust2 before I had to present. Way more practical. Cough.

That was fun. Now I want to find the code...

Prolog: just say no.

(To explain the joke: one of the main problems with Prolog is that queries that fail simply emit a "no" with no further explanation, making large Prolog setups hard to debug.)

I think the usual (and funnier) version of that joke is:

    Q: How many Prolog programmers does it take to change a lightbulb?
    A: no

This is a nice coincidence.

A few weeks back a guy at work gave away a box of old computer books, and I picked up several on Prolog and logic programming.

I've been reading through an ancient edition of "Programming in Prolog", which is good, but I've had a hard time finding modern code samples. The code in the book mostly works, but I've seen some weird behavior that I suspect is due to language changes. It'll be nice to have these up-to-date examples using modern idioms.

One of my favorite languages; a pity that it has had an even harder time than Lisp getting mainstream acceptance.

I once asked my Prolog instructor why this all takes so long. His reply: "How well are you prepared for the next ISO meeting?"

At that point, I realized that if you prepare yourself in earnest for, say, the standardization process, or even to accomplish any concrete improvement in Prolog systems (such as better indexing, more advanced constraints, faster garbage collection etc.), then there is so much to do that there is barely enough time to do it properly!

Hence, I think it is fair to say we are progressing as fast as we can, and such complex projects as building a full-fledged Prolog system inherently take an amount of time that we are not yet fully accustomed to. Prolog systems have been in development for many decades and are still being improved all the time. In my view, they are only now becoming truly interesting, with many great features such as multiple argument indexing, tabling and expressive constraints.

It sounds like ISO standardisation was a mistake?

When I was at university, the ISO standard was freshly baked; what mattered was SWI and SICStus compatibility.

Pascal's issues regarding industrial use and proliferation of dialects were mostly fixed with ISO Extended Pascal; other than GNU, everyone else ignored it and focused on Turbo Pascal/Object Pascal compatibility instead.

Ruby and Eiffel happen to have ISO standards, but I bet even developers in those communities don't know it.

For a long time, whatever ISO SQL specified had little to do with what RDBMSs were actually providing.

ISO only matters if it is something vendors actually care about.

Let's face it, procedural is just plain easier to debug, because you can more easily split it into digestible and dissectable chunks: divide and conquer. Perhaps functional and logical languages CAN readily have such features, but nobody has either figured out how, or figured out how to explain such dissection to normal people. (This is sometimes a criticism of SQL also, but the added WITH clause helps break up big queries into multiple smaller ones.)

For example, with a procedural loop you can stick a debug statement in the middle to look at just about anything:

  while (x = foo(y, z)) {
    a = doStuff(x, z);
    console.write("x={?},y={?},z={?},a={?}", x, y, z, a);
  }

No idea why you think functional programming can't do printf debugging. Even pure languages like Haskell have `unsafePerformIO` (or wrappers like https://hackage.haskell.org/package/base- ).

Also functional programming heavily encourages splitting up code into digestible chunks (they're called functions ;) )

I guess humans, being the temporal creatures we are, have an easier time debugging procedural programs because they have a "story", an expected sequence of "events" (calls, reassignments, etc.), that naturally goes with the program. Functional programming inherently obscures that.

Prolog on the other hand has a very clear imperative reading, that fits alongside the declarative one. In fact, this is probably one reason why many programmers used to imperative languages find it hard to pick up Prolog and run with it - they get distracted by the imperative reading.

On the other hand, that makes it much easier to debug code, by thinking along the lines of "p/2 is called after p/4" or even "p/2 returns A that is passed to p/4" (even though strictly speaking predicates don't "return" stuff).

"Frogs might like go-to's": Joke from a heated debate about "go to" statements regarding whether go-to's are objectively bad, or it's matter of something in the human mind.

I don't know if that's the case. It can certainly be tricky to "unlearn" this instinct when trying a language like Haskell after being immersed in imperative languages, but I'm not convinced that either is 'more natural' than the other.

For example, it's easy to forget how utterly baffled a learner can be when faced with:

    x = x + 1

I don't have Haskell experience, but I could imagine that the traditional debug experience is a bit different if you have a lot of functions as values. It's at least more challenging to visualize a function composed from 4 other ones, maybe with partially applied parameters, vs. a map of strings. Or how does this go?

Ps: I currently prefer my .NET as F# and so on, but debuggability doesn't feel like its biggest strength indeed, although it's usually OK.

Although debugging in haskell typically feels different than procedural or OO languages (much less state to rely on, so debugging is used much less to understand what happens, and much more to control that your next logical step checks out), grandparent's assertion that you can't printf simply lacks merit.

I did not say "can't do". I essentially said "not as well". Of course "well" can be subjective, but if enough people don't like the way a given language/tool does it, they will ignore it, as they have been ignoring FP/logical programming for several decades. Somebody needs to articulate in careful, unambiguous detail how and why it's "better" and "easier", not just repeatedly claim it. That writer apparently hasn't been born yet.

You seem to be moving the goalposts to "why is FP better than X". I was simply pointing out that printf debugging is easy in FP. In fact, it's probably easier in FP, since everything is an expression, whereas imperative languages have a weird expression/statement distinction. For example, if I have code like:

    buggyCode x y = if foo x
                       then bar x y
                       else baz y
I can wrap anything on the right-hand-side in a printf (except the keywords if/then/else). At the extreme end I could do:

    buggyCode x y = trace "hit buggy code" (if trace "applying foo to x" ((trace "hit foo" foo) (trace "hit foo's x" x))
                       then trace "applying bar x to y" ((trace "applying bar to x" ((trace "hit bar" bar) (trace "hit bar's x" x))) (trace "hit bar's y" y))
                       else trace "applying baz to y" ((trace "hit baz" baz) (trace "hit baz's y" y)))
I could even move the x and y arguments across to the right-hand-side using anonymous function notation, then I can trace partial applications too:

    buggyCode = trace "hit buggyCode" (\x -> trace "gave x to buggyCode" (\y -> trace "gave y to buggyCode" <previous right-hand-side goes here>))
Note that the built-in if/then/else is mostly a legacy crutch to aid familiarity. We can just use a function instead, and printf all the things:

    ifThenElse True  x y = x
    ifThenElse False x y = y

    buggyCode = \x -> (\y -> (ifThenElse (foo x)
                                         (bar x y)
                                         (baz y)))
(Of course, defining our own if/then/else isn't much use; we're usually better off writing more meaningful, domain-specific alternatives)

That's another thing: "high-meta" languages allow individual developers to re-invent the flow-control wheel in their own image (personal preferences). Consistency is often more important for team readability than factoring (compact code). One man's "clever" can be another's spaghetti code. What works well for individual projects often doesn't scale to larger and often-changing teams.

That argument can certainly be applied to Lisps, with their extensive macro systems, but that's mostly orthogonal to programming paradigm. Lisps aren't necessarily functional: Common Lisp is highly imperative, and even Scheme is 'the ultimate imperative' https://dspace.mit.edu/handle/1721.1/5790 !

As for that `ifThenElse` example, there's nothing "meta" going on at all and there's no fancy control flow: it's just a (very simple) function. Consider a function like:

    renderPage User  page register = page
    renderPage Guest page register = register
Doesn't look like 'clever spaghetti' to me: we branch between 'User' and 'Guest', returning a different one of our arguments for each. Yet this is identical to that 'ifThenElse' function, except for the names (which are irrelevant to the machine).

This is what I meant by writing "more meaningful, domain-specific alternatives". Not to "re-invent control flow", but to avoid talking about control flow, and just use functions.

Note that we can do pretty much the same in any language, e.g. Python:

    def ifThenElse(c, x, y):
      return {True: x, False: y}[c]

    def renderPage(session, page, register):
      return {User: page, Guest: register}[session]
Nothing "meta" or "re-inventing the flow-control wheel" there; just some functions. Of course, this isn't a great idea in Python since Python is impure and uses eager evaluation.

Note that such "meta" "reinvented control structures" appear all over the place in OOP, due to dynamic dispatch, e.g.

    class User:
      def renderPage(self, page, register):
        return page

    class Guest:
      def renderPage(self, page, register):
        return register
These are the two solutions to the "expression problem":

- FP restricts the allowed values but allows new functions to use them. For example, adding an Admin session would be hard (need to rewrite renderPage), but adding a renderAvatar function is easy.

- OOP restricts the allowed functions (methods), but allows them to use new values. For example, adding an Admin class would be easy, but adding a renderAvatar method would be hard (need to rewrite User and Guest).

> and even Scheme is 'the ultimate imperative' https://dspace.mit.edu/handle/1721.1/5790 !

Scheme isn't, lambda is. That paper is about how lexical closures alone are enough to simulate a lot of imperative concepts like assignment, loops, and gotos (they do cheat and use set! in a couple places, but the point of the paper is how much lambda application can do all on its own). Your broader point about Lisps not being particularly functionally oriented is correct, though; they're some of the most unopinionated languages you could imagine.

At least some of your assumptions are wrong: no-one is ignoring FP anymore.

They don't have a choice: the latest languages force it on you by not giving better alternatives, for example through their stiff object models. For instance, why the heck can't one easily attach an on-click method to a Java button, instead of being forced to use silly lambdas? It would be more natural and simpler. Over-focus on FP has damaged OOP improvements/designs. Each generation seems to overdo something, and then the pendulum swings the other way.

I don't understand your example (and I don't understand why you consider lambdas silly), but more importantly: if most of the world -- by which I mean modern language designers and practitioners -- seems to be going in one direction, and you insist on staying put, aren't you at least a bit worried you might be wrong about it? "It" being the partial adoption of FP idioms and features, in this case.

Prolog has print statements you can stick anywhere. (It also has proper debuggers that allow you to go forward and backward and all that.)

Also, Prolog code is often "split into digestible chunks". Even more than other languages, in fact, because it's rather hard to write nested loops or nested predicates or nested many other things.

If you were ever taught Prolog, it seems that you weren't taught well. You might want to risk a second look.

>> Also, Prolog code is often "split into digestible chunks". Even more than other languages, in fact, because it's rather hard to write nested loops or nested predicates or nested many other things.

I find it very easy to write nested loops in Prolog:

To paraphrase the old saying, a good Python programmer can write Python in any language.

and correspondingly, a good Prolog programmer can write Prolog in any language.

Limerick: "Some rascal wrote Pascal in Haskell."

>> and correspondingly, a good Prolog programmer can write Prolog in any language.

And a really good one can even write it in Prolog.

> easier to debug

Incidentally, my impression has been that the use of a declarative (or pure functional) language leads to spending drastically less (if any at all) time debugging. If your code makes sense (to you), it will work. Or, as an old adage goes, "If it compiles, it works."

I can confirm this impression. In my opinion, there are several reasons for this:

First, many types of mistakes that are common when working with lower-level languages cannot occur at all when you use a declarative language. For example, with a logic programming language, you cannot accidentally double-free a pointer or write into unintended portions of memory.

Second, when working with a high-level language, the code is much shorter than if you had used a lower-level language. This makes it easier to debug, and also easier to get right in the first few tries.

Third, once there is a mistake, it can be found in ways that fit the overall paradigm nicely. For example, in pure Prolog, if your predicate terminates and has a mistake, then it is either too specific or too general (or both), and you can debug both cases very systematically and even automatically, using built-in mechanisms of the language.

Declarative debugging methods for Prolog were pioneered by Ehud Shapiro in his 1982 PhD thesis, Algorithmic Program Debugging:


I also highly recommend Ulrich Neumerkel's teleteaching environment GUPU for systematically finding mistakes in Prolog programs:


You guys must come from another planet, because I spend 90% of my Prolog coding time debugging my code. My code sure makes sense to me- but the interpreter frequently disagrees.

Additionally, there are some constructs that are pretty damn hard to debug. For example- deeply-nested recursive loops with multiple clauses and free backtracking; you never know which clause you're in or how deep in the recursion. Well- you do but it demands a great deal of concentration.

Or take my personal tracing nightmare- DCGs. At some point, you inevitably end up with something like this:

    _24234 = _12734

Then you have to squint up the trace to find the two variables to figure out what's going on.

In my experience, the trick is to let go of the imperative reading since it becomes too complex in the situations you mention.

Instead, focus on declarative properties: Does your predicate (or DCG) succeed in cases it shouldn't? Then your program is too general, and you need to add constraints. Or does your predicate fail in cases it should succeed? In that case, at least one of the goals is too specific. You can use declarative debugging to find out which goals are responsible for the failure.

You can add the following definition to your program to "generalize away" a goal by simply putting a * in front:

    :- op(920,fy, *).
    *(_).
For example:

    pred :-
         false.
?- pred. now fails. Why? To find out, try to locate the reason for the failure by generalizing away goals. For example:

    pred :-
         * false.
This succeeds. So the last goal was responsible for the failure.

GUPU finds such fragments automatically for you! In a very precise sense, they are explanations for the problem. I know of no other programming language that admits this approach.

>> Instead, focus on declarative properties: Does your predicate (or DCG) succeed in cases it shouldn't? Then your program is too general, and you need to add constraints. Or does your predicate fail in cases it should succeed? In that case, at least one of the goals is too specific. You can use declarative debugging to find out which goals are responsible for the failure.

That's good advice and I realise it articulates the way I reason about why my programs fail when they do (which is all the time).

The problem is when reasoning doesn't shed any light on what's wrong. There are many such situations for me, often caused by the fact that Prolog still manages to surprise me even after many years of programming in it.

For example I recently found out that findall/3 (which I use liberally) actually copies terms with fresh variables causing subtle changes to the structure of a term that are hard to detect with the naked eye. It took me I think an hour or so of debugging my code until I realised (and accepted) what was going on and I still had to go on to the Swi mailing list and ask, just to be sure. This is not something you can reason about, unless you know the semantics of a specific predicate (in this case, findall/3) in detail.

In another recent example I was debugging code that backtracked over the combinations of a set, waiting for that one combination that caused the problem to reveal itself. It's very difficult to just reason about situations like this, when almost everything works but there is a subtle bug hiding in the wings.

The thing is, findall/3 is not really in the standard pure part of Prolog. It has the form findall(Template, Enumerator, Instances), and can be read as: "Instances is the sequence of instances of Template which correspond to proofs of Enumerator, in the order in which they are found by the Prolog system" (The Craft of Prolog). This reading is metalogical, because it depends on the instantiation of variables in Template and Enumerator and on the proof strategy of the Prolog system. So, for example, if you used a meta-interpreter to change the search strategy of Prolog from top-down to bottom-up, then the result of a call to findall/3 would change. setof/3 does not have this problem, because it finds the ordered set and can fail (in contrast to findall/3, which will return an empty list). You need to bear in mind that we as programmers are concerned with answers to a query, but Prolog returns proofs. I now try to avoid findall/3 in my programs and only use it when I am doing input and output from my core program.

That sounds like it could benefit from Design by Contract since it tells you in a lightweight way what a function will do. I actually submitted something like that to Lobste.rs recently when collecting all the implementations of contracts I could in various languages. Here's that one:


> Or, as an old adage goes, "If it compiles, it works."

This isn't said about the declarative nature of languages so much as it is about the benefits of powerful static type systems, in my experience.

It's quite easy to write Prolog programs that look right (to me, someone who doesn't know Prolog well) but that compile, successfully run to completion, and produce incorrect results (or don't terminate). In Haskell, the compiler actually rejects a huge number of incorrect programs that I thought were going to work (and the fact that I thought that shows the declarative nature of the code wasn't sufficient). Once the typechecker is finally happy, the code often does work without further debugging; this is a common experience, and what the saying refers to, I believe. In Prolog (or dynamically typed functional languages like Scheme), there's very little of that, and I'm left to catch most bugs on my own (even if, as you said, there won't be as many, because they'd be so obvious in the first place).

Prolog might be exceedingly clear once you learn how to read it and it might eliminate errors that way, but "if it compiles, it works" is a terrible description of that. "If it looks right to someone who knows the language well, it works" is much closer, but not as punchy. The entire point of the original saying is that you don't need the experience with the language to get that, because the compiler itself will tell you if you're wrong when your types won't line up.

With mixed-model programming (Python), I usually find that the most difficult part of a program to debug is where functions are being passed around. SQL, on the other hand, either works or it doesn't. (I don't know Prolog, but it does look interesting.)

This is backwards. Debugging is much easier in a functional language, because the division of your program is already done for you, and immutability makes it much easier to understand what's going on, since you can compare function inputs and outputs really easily. For example, if you ever accidentally step past the part you were interested in, it's always safe to "drop frame" and step forward again with the same behaviour; whereas if your program is mutating state, that mutation has now happened, and you can't debug it again without restarting the whole thing.
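A tiny Python illustration of the "safe to re-run" point (made-up functions, just to show the contrast):

```python
def total_pure(items):
    # Pure: replaying this call during a debugging session is always
    # safe, because the answer depends only on the arguments.
    return sum(items)

def total_mutating(acc, items):
    # Impure: each call appends into acc, so "dropping the frame" and
    # stepping through again gives a different answer.
    for x in items:
        acc.append(x)
    return sum(acc)

items = [1, 2, 3]
first_run = total_pure(items)
second_run = total_pure(items)       # identical: 6 both times

acc = []
once = total_mutating(acc, items)    # 6
twice = total_mutating(acc, items)   # 12: the state mutation already happened
```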

I am curious whether you used a visual debugger for Prolog (or for a procedural language)? Printing things to the console is the poor man's debugger.

A big problem with visual debuggers is that they are too blunt an instrument. Often I wish to examine specific things under specific conditions, and thus use conditionals to narrow down what's being explored instead of a bajillion breakpoints and/or log records. Debuggers could add conditional expressions, but then they'd be re-inventing WRITE statements anyhow.

As for the nearby statement that maybe I didn't learn dissection correctly: that may indeed be the case, but it was also my original point. Perhaps the real problem is that nobody knows how to TEACH functional/logic programming right, and that's why they have not caught on in the mainstream despite being around for several decades. I'd love to "get" the benefits, but it just won't click. Sometimes you have to teach programmers how to think, not just what to think.

> Debuggers could add conditional expressions, but then they'd be re-inventing WRITE statements anyhow.

SWI-Prolog's debugger can be invoked conditionally whenever you want from inside your running application. I've used it many times, and the code is simply:

    (Condition -> gtrace ; true)
It's true that you could stick a write there, but you would have to think about what pieces of data you will need. Your mileage may vary. I've found this style a lot more useful than "traditional" debuggers where you set breakpoints beforehand and hope you hit them, as you say.
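For comparison, the same conditional-entry trick in Python would look something like this, with pdb.set_trace playing the role of gtrace (the surrounding function is hypothetical):

```python
import pdb

def process(item):
    # Enter the debugger only for the case we care about, instead of
    # setting an unconditional breakpoint and hoping to hit it.
    if item.get("id") == 42:
        pdb.set_trace()
    return item["value"] * 2
```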

> As far as the nearby statement that maybe I didn't learn dissection correctly, that may indeed be the case

The point was more specifically that you didn't learn Prolog properly (or at all) and don't realize that it forces you to dissect your code. As I said, in Prolog you cannot nest loops. You cannot nest loops. It's hard to write a big non-dissected piece of code if you cannot nest loops.

> perhaps the real problem is that nobody knows how to TEACH functional/logical programming right

I agree with regards to logic programming. We need better free tutorials. I don't particularly like the linked one, it feels like it doesn't start at a beginner's level. I would be interested if anyone who didn't know any Prolog before has managed to learn Prolog from it.

Regarding your last sentence: Yes, I can confirm that this has happened, to a reasonable extent of "learned" in the sense that people went on to write programs using this material. However, I do not know how common this is, since I almost exclusively get feedback from those who found the material suitable. There were definitely complete beginners among them. For instance, here is a discussion that happened a few months ago, where it appears that beginners also comment positively:


More recently, two university instructors started to use this book in courses that assume no previous knowledge about the language:

Andrej Bauer in his specialist elective course Principles of Programming Languages at the University of Ljubljana:


Norbert Zeh in his Principles of Programming Languages lectures at Dalhousie University in Canada:


Much of my current work is aimed towards making the teleteaching environment GUPU available in a free Prolog system:


This will help a lot to teach Prolog to beginners in a scalable way.

On a rather personal, and maybe controversial, note: I regard the self-taught Prolog master as about as prevalent as, for example, the self-taught piano, chess, or Zen master. In my experience, learning Prolog properly requires dedicated guidance, and it would be highly unusual to achieve mastery without it.

Cool, good to see others are finding this useful.

> In my experience, learning Prolog properly requires dedicated guidance, and it would be highly unusual to achieve mastery without it.

There are two readings of this sentence. An unkind spirit might think you are saying that Prolog books cannot make you a master for some inherent reason related to Prolog, independent of any book. Whereas I would say that there is no inherent reason, it's just that currently existing Prolog books cannot make you a master because they are simply not very good. Or in other words, yes, guidance may be needed. But can it really not be guidance from a book?

In my experience, the combination of a dedicated instructor, an automated teleteaching environment (GUPU), and a lot of textual notes (either as a separate book, or integrated in the teleteaching environment, or both) is the most effective way to teach Prolog, and removing any one of the three either creates frustration for students or fails to impose the necessary direction.

In addition, for social and other reasons, you cannot write down in a book everything that advanced Prolog students should know. As a consequence, some things are better said than written, and I therefore think that a book alone is unlikely to be sufficient.

> In my experience, learning Prolog properly requires dedicated guidance, and it would be highly unusual to achieve mastery without it.

Where would you find such a master, though... If you are not studying at a university where Prolog is taught, you're pretty much out of luck, I think. (I am assuming that asking questions in newsgroups/IRC/etc is not a substitute for it.)

Yes, I agree. I was lucky enough to study where the current convenor of the Prolog ISO working group teaches, using a Prolog- and Elisp-based teleteaching environment that he has built over several decades.

As in other disciplines, if you are elsewhere, you may have to move to get close to a master. This is not so unusual, is it? Vienna is a nice city for studying logic, and does attract students from all over the world!

I learned logic programming before learning Prolog.

We got our introduction to it via the Tarski's World book (90's edition).


Re: "but you would have to think about what pieces of data you will need." -- But with procedural you don't have to think much; the parts are right there to selectively examine.

Re: "in Prolog you cannot nest loops...It's hard to write a big non-dissected piece of code if you cannot nest loops." -- But nesting is good for modularity. It's sometimes called "step-wise refinement", which is a brilliant abstraction that is usually easy and natural to comprehend. Nesting is how we have been splitting up society since the dawn of civilization: countries, states, counties, cities, etc. (or equiv.) Similar for big militaries. (Maybe this is a meta form of Conway's law? Either way Conway was on to something.)

>> As I said, in Prolog you cannot nest loops. You cannot nest loops.

What do you mean here? My earlier comment was jocular, but you can definitely nest loops in Prolog:

etc. That's a nested loop, right there.

Do you mean it in the sense that it's still the same predicate rather than a consecutive change of scopes?

Most concretely I mean it in the context of the common auxiliary-predicate-with-an-accumulator pattern:

    foo_bar(Foo, Bar) :-
        foo_bar_aux(Foo, Bar, 0).

    foo_bar_aux(Foo, Bar, N) :-
        ...
Which in a language like OCaml would be hiding the loop in a recursive function defined inside the main one:

    let foo_bar foo =
        let rec foo_bar_aux n = (* ... *) in
        foo_bar_aux 0
So yes, I think this is what you mean by being the same predicate rather than a "change of scopes".

Of course there are many loop-like things that you can nest. If the arguments align just right, you can also use `maplist(maplist(maplist(baz)), ...)` instead of three auxiliary predicates, but I don't encounter many cases where this is possible or desirable.

I see what you mean, thanks. I think we're talking about the same thing then.

Your Console.log will not help you in a recursive function

If you have the right tracking variables, it's great for the job: "console.write(currentNodeID, parentNodeID, depthLevel, nodeVisitCount);" And one can wrap that into a conditional if they don't want to echo the entire journey. It's usually good to create such tracking variables in non-trivial systems anyhow. Bleep happens.

Correction: should be "nodesVisitedCount". Also note that output formatting was sacrificed for brevity of the example.
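The tracking-variable style described above might look like this for a recursive traversal in Python (the node structure and names are made up):

```python
def visit(node, parent_id=None, depth=0, counter=None):
    # Thread a parent id, depth level, and visit counter through the
    # recursion, so each log line locates us in the traversal.
    if counter is None:
        counter = {"visits": 0}
    counter["visits"] += 1
    print(f"node={node['id']} parent={parent_id} "
          f"depth={depth} visits={counter['visits']}")
    for child in node.get("children", []):
        visit(child, node["id"], depth + 1, counter)
    return counter["visits"]

tree = {"id": 1, "children": [{"id": 2}, {"id": 3, "children": [{"id": 4}]}]}
total_visits = visit(tree)  # logs four lines; total_visits == 4
```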

> printing things to the console is the poor man debugger.

You may want to try working with networks before making this statement, when you cannot step through a function because of timeouts. Or maybe even look at tracing debuggers, like strace, ltrace, or Erlang's dbg, which all do little more than "printing things to the console".

Logging (and logging-like tools) is a very important way of debugging services, you know.

Sure, if you do not have a debugger you use logging, but you will miss the main features of a debugger, like seeing the entire call stack and all the variables and parameters at each stack level, or evaluating expressions in that context.

I use logging for problems that happen on a user machine or server where I can't open the debugger and see them. When an error happens on a server or a customer machine and I can't use the debugger, I have to guess the problematic section, put in lots of logging, try to reproduce, then add more logging, and so on until I find the problem.

Debuggers are much better, but I notice my colleagues do not use them and prefer console.log or similar.

Each does better in certain situations. One is a hammer and the other a tweezer. Different debugging jobs/steps need different tools. But the real question is whether the hammer/tweezer kit that comes with Prolog is better than the hammer/tweezer kit of other paradigms/languages.

In my personal opinion and observation, the hierarchical step-wise refinement approach is quicker for most developers to grasp, learn, and debug. Maybe if those of us who don't see the grokkability benefits try it for 50 years, it will finally click. But you better hope to be in management by then because the programming field doesn't like "old people". I'm just the messenger.

Prolog is ideal for some kinds of problems. Good programmers should know what logic programming is and which problems are perfect for it; then, if in real life you hit such a problem, you can choose Prolog or a library that does logic programming.

I'd like to see a comparison to the alternatives to illustrate objective benefits. I've been down that road before in my-language-can-beat-up-your-language debates, and the end result is usually, "Well, my approach looks easier and simpler TO ME; you just think wrong." Other times they do queries in the app language when they should be using a database.

I don't think I understand your point. Prolog is a logic programming language; some problems are easy to express as facts and rules, so Prolog is a good solution for them. You can see examples of real usage in this HN thread.

It is like recursion: some problems can be solved nicely with recursion, but you do not use recursion everywhere. Also, you can use a library in your favorite programming language and not use Prolog; examples were presented in this thread.

My point is that a good programmer should know about the concept, not use it everywhere; I never had the chance to use Prolog for my work so far.

Btw, I am also past the stage where I have a favorite language and OS; every piece of software has its issues.

The examples I see are what I'd call "lab puzzles" and not practical real-world enough. I invite you to point out a specific example that's highly practical/realistic.

Did you see the real-world uses? Prolog was, and may still be, used in Windows. The puzzle examples are for educational purposes.

Again, I am just a noob; I learned Prolog and did some basic things years ago. A cool page I used to learn from was this tutorial that implements a text-based game, more refreshing than the tutorials with puzzles or the family tree examples:


The situation is maybe similar with GAs (genetic algorithms): when you learn them you have some simple examples, but when you want to solve a real problem things get complex fast (how to encode the problem, what fitness function to use, ...).

> procedural is just plain easier to debug because you can more easily split it into digestible and dissect-able chunks: divide-and-conquer.

That's how you think and program in FP as well. Not just debugging, I mean: FP is all about divide and conquer. In fact, Hughes's seminal paper "Why Functional Programming Matters" [1] is all about what he calls modularity, i.e. splitting a problem into parts and then gluing it back together.

[1] https://www.cs.kent.ac.uk/people/staff/dat/miranda/whyfp90.p...

Do you know about the four-port Prolog debugger, a.k.a. the Prolog tracer? If not, it's used to actually step through your code, one call at a time. You can set breakpoints and everything. SWI-Prolog even has a graphical interface for it (though I personally prefer the textual interface).

Bottom line- yes, you can add print statements etc, but you can also directly debug your Prolog code just like you'd do for an imperative language.

> you can more easily split it into digestible and dissect-able chunks: divide-and-conquer

Eh... What? It's procedural code that can change anything anywhere and thus cannot be easily separated into independent pieces.

Picolisp has a Prolog engine built in.


The important thing to remember about non-Prolog "Prolog engines" (which I think Pico's "Pilog" is) is that for large applications and data sets, and generally for anything non-trivial, you need an industrial Prolog implementation for it to be fast.

Tiny Prolog-style "engines" can be fun and educational, but they are not a substitute for actual Prolog.

I'm taking note.

Question: is there any industrial-strength Prolog for Common Lisp, besides the Allegro product?

Very good material. Prolog (or the ideas behind it) really needs a resurgence.

It's still going strong in the industrial side of things (like SCADA). You just don't hear much about it.

I've never heard of Prolog alongside SCADA. Can you be more specific? Like in a plant or what?

You know, I'm not sure where I got that from. Maybe I was confusing prolog with the ladder logic, or other languages from IEC 61131-3? I'm really not sure... but after some duckduckgoing I've come to the conclusion I was wrong in my previous statement... sorry.

No problem. Ladder logic and C I can definitely see, but SCADA would have blown my mind :)

Prolog was one of the hot A.I. prospects of the 1980s. It could be a way to build an expert system. Japanese fifth generation computers were supposed to run it fast.

Then the A.I. boom fizzled. Neither the first nor the last business cycle.

The fifth generation project didn't actually use Prolog, but instead a custom Prolog-like language.

However, there's still work being done adjacent to the fifth generation project using real Prolog, by people who thought it was an interesting idea: https://github.com/bitlaw-jp/the-constitution-of-japan

Prolog is still used in the development of expert systems, though when those systems are shipped, they have often been rewritten in an imperative language based on a Prolog prototype.

The fifth generation project, from what I understand, died because of a dedication to custom hardware using already-obsolete fabrication tech -- mid-70s flipflops-on-IC tech over late-70s microprocessor-on-IC tech, wire wrap, etc. There was an inability or unwillingness to iterate on original committee designs. By the time it folded in 1998, everything involved looked like stone knives and bearskins. Those boxes wouldn't have been able to run it fast by 1980 standards.

I recall at college in 1989 our lecturer had brought in a chap who'd done an entire airline baggage management suite written in prolog (turbo prolog, to be more accurate). The demo we saw was absolutely gorgeous -- real time updates on-screen to changes in baggage locations elsewhere (in the system). This was on i386-equivalent machines, before anyone gets overly nonchalant. He explained that he'd been up against two competing tenders - one in COBOL, the other in C - both higher priced and longer delivery times.

Nonetheless, I still never really properly grokked prolog, as much as I tried. I feel my brain had already been wired away from declarative languages. Perhaps a poor cop-out.

Prolog loses a lot of its magic when you realize it's just DFS the language.

C loses a lot of its magic when you realize it's just memory but with syntactic sugar.

A fun fact that may appear surprising at first: In contrast to Prolog, C is actually not Turing-complete!

Here is more information:


In C, you can only represent an unbounded tape if you assume the existence of an unbounded stream, i.e., in a so-called hosted implementation of C where unbounded streams are available. But unbounded streams cannot be implemented in freestanding C, even if unlimited memory were available!

... plus unification the magic.

What is DFS?

They’re referring to Depth First Search
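To unpack "DFS the language" plus "unification the magic": Prolog's core execution model can be sketched in a few dozen lines of Python. This toy engine (all names illustrative; no cut, no occurs check, no indexing) resolves goals depth-first against a list of clauses:

```python
class Var:
    """A logic variable, identified by object identity."""
    pass

def walk(t, s):
    """Follow variable bindings in substitution s."""
    while isinstance(t, Var) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    """Return an extended substitution, or None on failure."""
    a, b = walk(a, s), walk(b, s)
    if a is b:
        return s
    if isinstance(a, Var):
        return {**s, a: b}
    if isinstance(b, Var):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return s if a == b else None

def rename(term, mapping):
    """Give each use of a clause its own fresh variables."""
    if isinstance(term, Var):
        if term not in mapping:
            mapping[term] = Var()
        return mapping[term]
    if isinstance(term, tuple):
        return tuple(rename(t, mapping) for t in term)
    return term

def solve(goals, rules, s):
    """Depth-first search: resolve the first goal against each clause in order."""
    if not goals:
        yield s
        return
    goal, rest = goals[0], goals[1:]
    for head, body in rules:
        m = {}
        s2 = unify(goal, rename(head, m), s)
        if s2 is not None:
            yield from solve([rename(g, m) for g in body] + rest, rules, s2)

# A tiny program: two facts and one rule, like
#   grandparent(A, C) :- parent(A, B), parent(B, C).
A, B, C = Var(), Var(), Var()
rules = [
    (("parent", "tom", "bob"), []),
    (("parent", "bob", "ann"), []),
    (("grandparent", A, C), [("parent", A, B), ("parent", B, C)]),
]
G = Var()
answers = [walk(G, s) for s in solve([("grandparent", "tom", G)], rules, {})]
# answers == ["ann"]
```

Real Prologs add clause indexing, the cut, constraints, and much more, but search plus unification is the kernel.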

I imagine that there must be some system similar to Prolog but with a probabilistic model of inference; for example, ground rules could be sorted by statistical properties to accelerate inference, just as machine-learning trees select features for classification (perhaps using cross-entropy minimization).

Edit: I found Journal of Machine Learning Research 7 (2006) 307–342, Kernels on Prolog Proof Trees: Statistical Learning in the ILP Setting

I wrote something like this, based in part on ideas from PLN: https://hackernoon.com/mycroft-a-predicate-logic-language-ov...

From my experience the hardest part is interaction with other programs and libraries. Still cannot make pipes work in https://github.com/XVilka/r2pipe.prolog/blob/master/r2pipe.p...

What's the problem you're having with it? I think this was discussed on irc recently? This was the example I gave of using process_create which works:

    process_create(path(cat), [], [stdin(pipe(In)), stdout(pipe(Out))]),
    write(In, "hello"), nl(In), close(In),
    read_line_to_codes(Out, Codes).

Well, I need to keep both pipes open during the "session", since a usual radare2 session involves hundreds of commands and their output. Opening a new pipe and process every time is the wrong way to go, in my opinion.

You should be able to open the pipes at the beginning of the session, and close them after, passing the pipe variables around as needed. Prolog wouldn't be any different to other programming languages there. What errors are you getting? Maybe post to the swipl mailing list?

I will send a message to the mailing list then, thanks for the suggestion.

Practical logic programming in Clojure : https://github.com/clojure/core.logic

A modern logic programming language: https://github.com/mcsoto/cosmos

core.logic is a miniKanren implementation, which is quite different from Prolog (smaller and ultimately less powerful, but cool nonetheless). Many/all miniKanren solutions also work in Prolog, but not vice versa.


Turbo Prolog is now known as Visual Prolog. Just for those who wondered what happened to it and might want to use it.

Prolog is crazy-making and awesome in all the right ways. For some problems, it can be very concise.


Used Prolog many years ago; how is it now? Is it evolving?

It IS the language for expert systems; curious whether it can be the best language for general AI development.

Just installed Prolog under Ubuntu, and VS Code has a plugin for it. All looks good; will play with this more.

The core ideas behind Prolog have the feel of something discovered rather than invented, and I bet we'll see significant new forms of it yet.

My first encounter with it was while taking a Principles of Programming Languages course that I hadn't been attending regularly... We had a test on Prolog four days out and I hadn't learned a bit yet. I found a couple books in the library—one of which was excellent—and was soon so absorbed by it that I learned everything I needed out of sheer interest and got a perfect score on the test.

Some six years later—the whole time not having touched Prolog or even thought about it that I can recall—I got an idea for a programming language. It would be an attempt to model 'schemas' in the human brain. Programs written in it were supposed to reflect human conceptual structure. Once you defined the concept for something, let's say a chair, you could ask it to generate instances. It was in part going to be a language designed from the ground up for procedural content generation.

There were some promising things about it (like a very concise/natural way of defining 'analogy'[0] and other aspects of thought), but ultimately, a couple years after coming up with the idea, I picked up a copy of The Art of Prolog and read the first few chapters (for general edification, and so that I'd be more aware if it were applicable to something I was working on). By that point, I felt pretty convinced that my language idea would only be superficially different from Prolog, and that most of the really interesting ideas were already captured by that language.

One upside to the project was that, after coming up with a pretty detailed plan for how the runtime would work, I decided to write this 'abstract visual debugger' to help me build it (the runtime would involve loads of complex data-structure interactions that prior experience taught me were much easier to wrangle if visualized). It still seems like a promising project to me, and I work on it when I get free time. Video here: http://symbolflux.com/projects/avd

I also still have a bunch of random notes/ideas I took down about the language idea I had. Warning: I was pretty excited and they sound a bit like the ravings of a madman: http://symbolflux.com/IntellAgent-ideas.txt I also wrote an ANTLR grammar for it, but that's probably not as entertaining.


  +{original: Type, analogy: Type}
     Equivalence{Particularization{Generalization{original}}, analogy}
Equivalence etc. are all relation constraints; the run-time would discover a value for 'analogy'. The idea is to generalize whatever the thing is ('increases generality of the attribute Types', from my notes), fill in values of some params to particularize, and hold the constraint that the discovered analogy is Equivalent to the original (sharing the same structure, which had a definite meaning in the context of the language).
