Interpreter for a simple Lisp, Written in Prolog (github.com)
120 points by martinlaz 54 days ago | hide | past | web | favorite | 69 comments



It seems that it takes far fewer lines of code (and a correspondingly simpler program model) to implement Lisp in Prolog than in either imperative or functional languages.

Once I asked a Prolog expert at a conference, "Prolog is truly mind-bending and powerful, but why don't we see it used more often? Where are the jobs, so I could get one?" They said that for those who use Prolog in production it is a real competitive advantage that they don't want to announce to the world; plus, some are state actors and quasi-state actors who use it for huge scheduling and optimization problems and like their secrets secret. I think I believe that.


" they said that for those who use Prolog in production it is a real competitive advantage that they don't want to announce to the world;"

with due respect to prolog (which is a mindbending language), the proponents of every niche language say the same thing. "it is so powerful that users don't talk about it and that is why you don't hear about it"

I've heard many variants of this with respect to Forth, for example.

The reality is that devs can't stop talking about the languages, tools, and frameworks they use.

The simpler explanation is that next to no one actually uses Prolog in production, because it is, well, a niche language (which doesn't take away from its coolness).

Color me skeptical about "secret weapons".


I use Prolog in production. :-)

I know many people that do too and don't talk about it unless you move in the Prolog world.

Here's a stock broker using it https://dtai.cs.kuleuven.be/CHR/files/Elston_SecuritEase.pdf

The Java Virtual Machine specification's verifier is specified using Prolog: "... implemented the Prolog verifier that formed the basis for the specification in both Java ME and Java SE." https://docs.oracle.com/javase/specs/jvms/se7/html/jvms-0-pr... Prolog code -> https://docs.oracle.com/javase/specs/jvms/se7/html/jvms-4.ht...

https://kylecordes.com/2010/the-prolog-story

IBM Watson was written in C++ and Prolog


Thanks a ton! I was hoping to hear more about people like you when I wrote that comment.


How lively is the Prolog job market? Do they accept semi-noobs? (I've read more than half of the Bratko book.)


Few people use it overall, yes. That doesn't mean it's not a differentiator in some areas.

You can often find a Prolog system behind complex scheduling and resource-planning systems, for example. It's no longer pure Prolog nowadays; you also have constraint logic programming and other solvers hooked in (see my other comment on ECLiPSe CLP, http://eclipseclp.org/), but Prolog is still often the host language. The ECLiPSe web page, for example, has a reference to Opel, the car maker, optimizing its supply chain with it. SICStus (https://sicstus.sics.se/) is also present in this space.

In a very different domain, IBM publicly commented on how they use Prolog in their Watson system, the one that won Jeopardy (the name was over-used afterwards): https://www.cs.nmsu.edu/ALP/2011/03/natural-language-process...


Forth! "Going Forth to Erlang" by Manoj Govindan is a great talk on the language: https://www.youtube.com/watch?v=RpsZ1Ka2HPQ


Paul Graham himself considered Lisp his secret weapon, and it was very successful at that.


Lisp ain't no "secret weapon" anymore. Clojure has restored the historical awesomeness of Lisp. It's been rated the highest-paying programming language in popular developer surveys over the past two or three years. Companies like Apple, Cisco, and Amazon (even NASA) are using Clojure in production today.


My statement referred to the past. Look up Paul's essays on this. It happened in the nineties, if I'm not mistaken.


"COBOL has historically been very secretive and low key. Its domain of use being very secretive and low key. COBOL programmers rarely work on systems that would allow for open internet chat regarding details, let alone existence. It is a tribute to the professionalism of these programmers that most people rarely, if ever, hear the name COBOL, a programming language with billions of lines of source code compiled and in production around the world over half a century." -- GnuCOBOL FAQ


Do you know of anyone combining prolog with integer linear programming? I modelled quite a complex convex optimisation problem as a mixed integer linear programme, but it was very buggy at the beginning to say the least.

I was left wishing that there was a better way to express the logical constraints, which in any case can be converted to integer linear equations, while keeping the linear equations where needed and possible.
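Logical constraints can indeed be lowered to linear ones. As a rough sketch (not tied to any particular modelling language or solver), the standard "big-M" trick encodes an implication using a binary indicator variable; here the encoding of "if b = 1 then x <= 5" is checked by brute force over a small domain:

```python
# Encode the implication "b = 1  =>  x <= 5" as a single linear
# inequality using the big-M trick:  x <= 5 + M * (1 - b).
# M must bound how far x can exceed 5; x ranges over 0..10 here, so M = 5.

M = 5

def implication_holds(b, x):
    """The logical constraint we actually want."""
    return (b == 0) or (x <= 5)

def big_m_holds(b, x):
    """The linear encoding of the same constraint."""
    return x <= 5 + M * (1 - b)

# Brute-force check that the two formulations agree on the whole domain.
for b in (0, 1):
    for x in range(11):
        assert implication_holds(b, x) == big_m_holds(b, x)
print("big-M encoding matches the implication on all points")
```

Most MIP modelling layers apply a transformation like this behind the scenes when you write indicator constraints; doing it by hand at least shows why choosing a tight M matters.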


Markus Triska, the author of the Lisp interpreter above has also contributed library(simplex), a linear programming library for Swi-Prolog:

http://www.swi-prolog.org/pldoc/man?section=simplex


From first look, this library doesn't seem to provide a way of translating logical constraints into MIP constraints. I'm sure that could be implemented in terms of the library though.


> I was left wishing that there was a better way to express the logical constraints, which in any case can be converted to integer linear equations, while keeping the linear equations where needed and possible.

SWI-Prolog's clpfd library allows you to mix linear and non-linear integer constraints with "reified" versions that allow you to write implications and such. Silly example:

    ?- [A, B, C] ins 0..3, A #< B #==> C #= A * B, A #>= B #<==> C #\= B.
This means "for values A, B, C each in the range 0 to 3, if A is less than B, then C is equal to A * B, and A is greater than or equal to B if and only if C is different from B". Is this the kind of constraints you were thinking of?
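The declarative reading given above can be sanity-checked outside Prolog; this Python brute force (my own sketch, just to mirror the stated semantics) enumerates every triple in 0..3 satisfying both reified constraints:

```python
from itertools import product

# Enumerate solutions of the constraint described above:
#   (A < B  ->  C = A * B)  and  (A >= B  <->  C != B)
solutions = [
    (a, b, c)
    for a, b, c in product(range(4), repeat=3)
    if ((not a < b) or (c == a * b)) and ((a >= b) == (c != b))
]
print(len(solutions), "solutions, e.g.", solutions[0])
```

The difference, of course, is that clpfd prunes domains by constraint propagation rather than enumerating, which is what makes this approach scale past toy ranges.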


I read in several places (that I don't remember unfortunately) that prolog experts discouraged the use of SWI clpqr because it contained major mistakes. Does anyone know if it is still the case and if clpfd suffers from the same issues?


I doubt that, but it's hard to tell as this is kind of vague. CLP(FD) talks about integers, CLP(Q) and CLP(R) talk about rationals and reals, respectively. If you approximate rationals or reals with floating point, you will get into situations where you compute nonsense, so in that sense at least CLP(R) has issues, but that kind of issue would have nothing to do with CLP(FD).


https://github.com/SWI-Prolog/packages-clpqr/issues/4

Also cited at https://www.metalevel.at/swissues

I also read it on SO, but can't find it. I do, however, remember it being said that the SWI lib was ported from the SICStus one, that mistakes were added in the process, and that it couldn't be recommended for that reason.


Out of interest, do you know whether there are any problems with the SICStus version, and is there anything wrong with using SICStus over SWI?


It was stated that the SICStus version was correct, but I am no expert. To my knowledge, SICStus is the Rolls-Royce of Prologs, so there's certainly nothing wrong with that.


Seems like I was looking for CLPQR with logical constraints, which doesn't seem to exist.


If you want to do constraints, I'd argue that ECLiPSE prolog is probably best. You can also use a variety of solvers from it.


ECLiPSe CLP [1] (no relation to the Eclipse IDE) is a Prolog implementation with constraint logic programming support and interfaces to lots of solvers, including MIP ones [2]. So I guess the answer is yes, although you may not find many public use cases. ECLiPSe CLP was bought and open-sourced by Cisco.

[1] http://eclipseclp.org/

[2] From http://eclipseclp.org/features.html: "Solver Interfaces. ECLiPSe interfaces to the COIN-OR, CPLEX, Gurobi and XPRESS-MP linear and mixed-integer programming solvers, and the Gecode finite-domain solver. Other interfaces are under development."


Seems to be exactly what I was looking for. Thanks.


Thanks to all for the sources.


Every new powerful tool/language probably offers a competitive advantage to some companies.

Yet, nothing is really a secret in the world of software development.

Why is prolog among the rare few who can keep a secret ?


Because prolog requires a different mindset.

When I encountered Prolog I tried to understand it, and on the assignment (something about stacking dice with the same or different values on their faces, I don't recall exactly) I managed to write Prolog with explicit backtracking. That is, I had several "functions" with different fixed parameters tracking the recursion level they were at, and about a hundred lines of code for the assignment.

The TA called us in, trying to understand what we had done; we explained, and he showed us the "real" solution in a handful of lines of code. Then it clicked, and I understood it.

Without that moment I would have found Prolog too alien, and I guess most people do, which is why it isn't more present in the world. It's too different from "actual programming", that is, too different from imperative programming. Even Haskell and the like are closer to Python and the like than Prolog is. Heck, even Lisp can be closer.

It's not just learning a new programming language, it's learning a new way to think, not everyone can do that.


The most effective way Prolog was taught to me was in the form of the logic puzzles we are all familiar with: "We have 6 people who work 1 job each; here is a list of possible jobs, here is the list of people; the cook is male, the doctor went golfing with the cook on Friday, Joshua doesn't golf..."

After that, you move on to programs that check the validity of facts, and finally to behavior that resembles 'normal' programming. I agree that it's a massively different mindset, and the capacity to grasp Prolog quickly, or be interested in Prolog programming, should be considered a reflection of a person's adaptability and curiosity, IMHO.
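That puzzle style translates directly into exhaustive search in any language; here is a toy version in Python (the people, jobs, and clues are invented for illustration). In Prolog, the clues would be clauses and the backtracking implicit; here the search over assignments is an explicit loop:

```python
from itertools import permutations

# Three people, three jobs, one job each.  Made-up clues:
#   1. Bob is not the cook.
#   2. Cara is the doctor.
#   3. Ann is not the pilot.
people = ("ann", "bob", "cara")

def solve():
    # Try every assignment of people to (cook, doctor, pilot)
    # and keep those consistent with all three clues.
    return [
        (cook, doctor, pilot)
        for cook, doctor, pilot in permutations(people)
        if cook != "bob"        # clue 1
        and doctor == "cara"    # clue 2
        and pilot != "ann"      # clue 3
    ]

print(solve())  # -> [('ann', 'cara', 'bob')]
```

The Prolog version is roughly the same length, but the clues read as declarative facts and the enumeration disappears into the language.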


Prolog tends to "click" once you understand it's a query language for relations rather than a general programming language (although you can do a lot on top of it if you bend it just enough).


It’s not like you can’t write backtracking search in other languages. Prolog’s power seems to be in the ease of expressing new versions of these kinds of problems, but I’m skeptical that it actually has an advantage for solving large workflows.


There are other factors:

- the paradigm is high level, borderline esoteric to the mainstream dev crowd

- it's not the IT culture, it's expressive and mathematical, not tool nor community oriented

People in this area enjoy their own quests and don't create huge fads.


>> - it's not the IT culture, it's expressive and mathematical, not tool nor community oriented

Prolog is actually much closer to the computer science mindset than the statistical machine learning algorithms that are so popular right now. The kind of mathematics required to understand it is already in the computer science student's toolbox: discrete, combinatorial maths and logic.

The lack of toolkits and frameworks like in javascript, say, is the result of poor adoption, not its cause.


I'm not sure I get the ML parallel, but tooling is often a word thrown around by average groups as a feature required for a language to thrive or be valuable. For instance, Python/Java etc. often want these, whereas Common Lisp didn't really. And often that's because the language's paradigm/design makes tooling something you're not looking for. The few times you need something, you can hack it in on your own and keep your day going. Whereas other languages are heavy and require tooling, and people think that's the norm; it's even taken as proof of professionalism to have and master tooling.


Well, it looks to me like everyone is falling over themselves to learn about machine learning at the moment. I often see posts on HN asking how one gets started with it without first learning statistics, calculus, linear algebra and so on.

I find it strange that there should be so much interest from programmers in a field that is completely alien to their background. It's a bit as if neurologists suddenly started asking "how does one get started on quantum entanglement, without first learning about general relativity and quantum mechanics?".

And I find it strange in particular when compared to the lack of any such interest in logic programming, for which programmers, like I say, already have the necessary background.

So my conclusion is that the fact that most programmers are not interested in Prolog has nothing to do with how hard or easy the language is to learn, and much more to do with practical considerations, like what happens to be popular in the industry right now.


Oh that's clearly one factor.. ML is super shiny, prolog isn't.

That said, Prolog is also very non-procedural (unlike ML DSLs, which are layers on top of mainstream OO). It shifts your thinking to the metalevel / denotational view very quickly (since you don't think about one execution path but about nested, nearly pure enumerations). I think that's even more mind-bending than tensors.


Prolog is a great idea with one huge flaw that annoys me so much I don't want to use it even for side projects: it complects rule declarations with search-algorithm hints.

As for your comment about size, Prolog was designed in part to parse things, so it's really good at it. Some other programs in it are trickier and require a certain affinity to Prolog way of thinking.

If someone made a Prolog with cleaner syntax that completely segregates rules from hints, I would probably use it a lot.


A while back I developed an approach to this in Haskell: the idea is to use a logic monad transformer ("Backtracking, Interleaving, and Terminating Monad Transformers", 2005), working in a monad that implements a type class 'Costly' with a single method 'cost'. The implementation of 'cost' can terminate branches of the logic program or transform it in other ways. Polymorphism over this monad then generalizes the logic program over different search strategies.


If you want nonstandard search or other nonstandard semantics for Prolog, you typically write a metainterpreter. See some examples at https://www.metalevel.at/acomip/ (from the same author as the Lisp interpreter discussed here). It's really easy to implement something like breadth-first search, or depth-first search with iterative deepening, or random search, or some sort of best-first search.


I got into Prolog last year, and I really love it. I came across the fascinating article "Who Killed Prolog?" (2010) [0], from Maarten van Emden's blog, A Programmer's Place.

"Compared to these three [Fortran, Lisp, Smalltalk], Prolog has fallen far behind. This was not the case in the early 1980’s, when Prolog had caught up with Lisp in capturing mindshare of what you could call non-IBM computing (to avoid the vexed term “AI”). Hence the title of this article. As culprit (or benefactor, depending on how you look at it) I identify the Japanese “Fifth-Generation Computer System” project, which existed from 1982 to 1992.

Even for those who were aware of the project at the time, it is now worth reviewing its fascinating history in the context of its time. This article is such a review as well as a theory of how Prolog was killed and how Lisp was saved from this fate."

Richard Grigonis commented in 2014 (incorporated in a Postscript to the article):

"The funny thing about this is that, in 1982, as I recall, Fifth-Generation Computer Systems project director, Kazuhiro Fuchi, came to Columbia University in New York, along with Edward Feigenbaum, to give a speech and answer questions of students. Feigenbaum was railing about how the Japanese were going to take over AI and the world, and we should better fund AI researchers in America or we would all be left behind in the dust. It was as if he was using Fuchi as a prop to get more excitement in America for AI. ...

When the question-and-answer period came..., I raised my hand and said, “I hate to be the fly in the ointment here, but this whole thing is based on Prolog? A sort of embodiment of the first-order predicate calculus? Even with trimming the search space in various ways, paramodulation, etc, if you use logic as a knowledge representation scheme it always leads to a combinatorial explosion, doesn’t it? Even with parallel processing, if you encode all of the facts of the real world into this thing, you will end up with ‘all day deductions,’ won’t you?”

Feigenbaum looked around uncomfortably, swatted the air like he was indeed being pestered by a fly, but then, amazingly (and much to his credit) said — and very quickly at that — “Well, yes, he’s right. BUT Americans need more support because the Japanese are advancing the field!” Feigenbaum quickly moved the session forward.

It was the strangest moment. My friend Mike, who had tagged along to watch know-it-all me get verbally obliterated by this erudite group, was stupefied, incredulously uttering, with a tone of disbelief in his voice, “Oh my God Richard, you are right!” ...The top American researchers knew the FGCS was completely flawed, but we were humoring them and making a big deal of it so we could get better funding for other, LISP-based projects in the U.S."

Van Emden concludes:

"A quick way to get an idea of the promise of Prolog is to read “Prolog: The language and its implementation compared with Lisp” by D.H.D. Warren, ACM SIGPLAN Notices, 1977, pages 109-115. Warren shows that in the four years of Prolog implementation development an efficiency in terms of execution speed and memory use was reached that equalled what was reached by a quarter of a century of Lisp implementation development. This is remarkable for a language that in some respects is more high-level than Lisp.

The Japanese were smarter than researchers like Feigenbaum in that they took the trouble to discover that Prolog was a different animal from resolution-based automatic theorem provers, where the search space was pruned by the paramodulation technique you mention and by several others. Prolog is also based on resolution logic, but its inference is restricted to mimicking the function definition and the function call mechanism that has been the mainstay of conventional programming since Fortran. As Lisp also relies on this it is not surprising that since 1977 their performances are similar. In applications where Lisp need not search, Prolog does not search either.

I don’t want to suggest that Feigenbaum should have switched to Prolog, although I may have told him so during the rare and brief meetings we had in the 1980s. My present opinion is that the difference in the strengths of the two languages does not make one of them overall superior to the other. Other things being equal I might now recommend Lisp because its development has steadily continued since the time when interest in Prolog plummeted with the demise of FGCS.

I believe that FGCS was a plausible idea and was similar to the idea behind the Lisp machines of the time. FGCS failed because it failed to come up with a convincing demonstration. Such a demonstration should have come in the form of at least a prototype comparable to a Lisp machine. It could have been clear from the start that a project headed by government bureaucrats and staffed with technical people on loan from big companies had little chance of coming up with a convincing prototype.

A Prolog version of a Lisp machine was at least as promising as the Lisp machine itself. I believe that the failure of the Lisp machines was not predictable. Around 1990 everybody was caught off-guard by the rapid drop in price and rise in performance of commodity PCs. There were a few exceptions: Larry Ellison and Linus Torvalds come to mind.

“The Japanese Did It” is not the correct answer to “Who Killed Prolog?”. Prolog was killed by the failure in the early 1980s of mainstream AI researchers to find out about Prolog, by their willingness to play along with the “appropriate responses” to FGCS, and by their use of FGCS’s failure as an excuse for continued neglect of a promising alternative to Lisp."

[0] https://vanemden.wordpress.com/2010/08/21/who-killed-prolog/


> Such a demonstration should have come in the form of at least a prototype comparable to a Lisp machine

There were a few. See for example the Melcom PSI:

http://museum.ipsj.or.jp/en/computer/other/0009.html

Mitsubishi planned in 1986 to sell 500 of them for a price of $199000 each.


actually $119000


>> if you use logic as a knowledge representation scheme it always leads to a combinatorial explosion, doesn’t it?

The real question, of course, is: what doesn't? What is a knowledge representation that has expressive power equivalent to the first-order predicate calculus, but doesn't cause combinatorial explosion? The answer is: no representation that we know of.

Indeed, much of the early AI work with Lisp also used first-order logic-based representations and was also prone to combinatorial explosion. This was one important accusation levelled at the field by the author of the Lighthill report, which brought on the first AI winter, in the days of John McCarthy and Donald Michie.

Van Emden points out of course that combinatorial explosion was a concern in those very early days of logic-based AI and logic programming in particular, but not anymore.


In ~1987, I had an AI professor who described the Fifth-Generation Computing Project as starting from a plan written by two researchers. One wrote "We need to do speech recognition, natural language understanding, X, Y, Z, etc." The other followed that up with "So, we will build a Prolog machine."

This was shortly after the demise of Lisp machines, which our CS department had a fair stack of. The gist of the discussion was that a) language doesn't matter that much, and b) this was yet another example of special-purpose hardware being unable to keep up with general-purpose hardware.

This was also the professor who gave a talk at the computer science department of Texas A&M and came back saying, "that's not a real school." :-)


Great read!


In my (limited) experience, Prolog is amazing for a subset of what a useful program might want to do, but it can be really difficult for the remainder, which is why I’m personally a big fan of the miniKanren-based logic libraries for Scheme, or Clojure’s core.logic: you can use logic programming as a library for the subsets of your code where it makes sense, and use a more traditional language for everything else, seamlessly in the same codebase.


This is old code, but it has sparked a great discussion here.

In the 1980s I spent months writing a prototype planning system in Common Lisp (which is a primary language for me, and I wrote a CL book for Springer-Verlag back then). Even though I was a Prolog novice, I was able to rewrite my entire prototype in Prolog in one week. Admittedly this was because the second implementation of something is always easier, but also because Prolog was such a good fit to this particular problem.

I remember reading The Fifth Generation Computer about Japan’s AI efforts - enjoyed the book but it didn’t really click for me, mostly because I had a Lisp Machine and considered that the `one true path` for AI development at the time.


Prolog was one of my favourite languages during my degree, to the point that I participated in some logic programming championships.

Would like to have some excuses to actually use it on my daily work.

As side note, until our compiler design lectures decided to switch the required implementation language to the newly arrived Java, it was possible to choose whatever programming language to implement our assignments.

However Lisp and Prolog were off limits as they would make them too easy!


This brings back fond memories as writing a lisp interpreter in prolog was the final project for my survey of programming languages course in university. Didn’t know people would still be doing it 20 years later.


As someone who is implementing a Lisp interpreter (admittedly in C#, not Prolog), I've decided that this sort of project is somewhat analogous to restoring a classic car. It's a good way to get a handle on earlier times.


This was a fairly common project when I was in University, except we had to do Scheme in Prolog. https://github.com/FigBug/scheme

Prolog was a mind-bending language. Some things that would be so simple in another language were so hard, and some things that would be hard were so easy.

I never really got my head around Prolog, just enough to get the assignment finished. But I do remember it as being one of the more interesting assignments I did, interesting enough that I've managed to keep the code for 20+ years.


I suspect that an optimization problem I will have to face in a bit of time might be a good fit for Prolog. What would be the best resources for a quick, relatively high level overview/introduction, then a more involved guide for learning it?

This question has been asked elsewhere, but is there a way to pass data back and forth with other languages, and let them control the execution flow? A C interface would seem like a good IR to target.

My apologies if I used the wrong terminology, I am not intimate with the more academic nomenclature.



Some online books... require you to manually translate Prolog programs to Lisp forms that are no longer valid Prolog syntax.

I thought Lisp was commonly used to create domain-specific languages with different syntaxes. Wouldn't this be old hat for someone experienced with Lisp? (Asking because I don't have a lot of experience with Lisp.)


Often, Prolog implementations in Lisp used a Lisp-like syntax; they were not concerned with reading standard Prolog code.

Several implementations support(ed) standard Edinburgh-Prolog syntax. For example the LispWorks implementation of Prolog supports both: http://www.lispworks.com/documentation/lw71/KW-W/html/kwprol...


There is a small Prolog implementation in the book "On Lisp", done in 200 lines of code or so.


A more complex attempt at a Common Lisp in Prolog:

https://github.com/TeamSPoon/wam_common_lisp


This is a weird nit, and I might be wrong, but isn't the "pl" extension usually used with Perl scripts?


Prolog uses .pl and because of the Perl conflict occasionally .pro: http://www.swi-prolog.org/pldoc/man?section=fileext

I still found a

    autocmd Filetype pl set syntax=prolog
in my vim config from my Prolog days.


Exactly. Perl has insensitively hijacked it.


Now you know how forth programmers feel about F#. (.fs)


Funny. Chapter 24 of Paul Graham's On Lisp is about implementing Prolog in Common Lisp.


It's obligatory for any Lisp book to include a Prolog implementation.


Interesting. Do you know why?


Implementing some kind of Prolog (and also other logic formalism) was common in the AI boom. Some were primitive (for example only using Lisp-like syntax), some were quite extensive (then also providing Edinburgh-Prolog syntax). Most AI tools included some reasoning facility, often in the form of some Prolog. The two commercial Common Lisps (Allegro CL and LispWorks) still offer Prolog implementations.

One of the more extensive treatments of implementing Prolog-like features can be found in "Paradigms of Artificial Intelligence Programming" (PAIP) by Peter Norvig.

Book and code available here:

https://github.com/norvig/paip-lisp


Interesting, thanks. Will check out that book, had heard about it.


"Prolog is similar to Lisp on the main points." (ibid)


In my opinion, this is true for some main points, but definitely not for all of them.

A key difference between the languages is in how easily you can reason about programs. In Prolog, if you program in the pure monotonic subset of the language, you get certain algebraic guarantees that are not available in Lisp. For instance, for pure Prolog code and a declarative reading, removing a goal can at most increase the set of solutions, and removing a clause can at most decrease the set of solutions.
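These algebraic properties can be illustrated outside Prolog as well. In this Python sketch (my own analogy, with set comprehensions standing in for a pure conjunctive query), dropping a goal from the conjunction can only enlarge the solution set:

```python
# Monotonicity analogy: a pure conjunctive query over a finite domain,
# modelled as a set comprehension with each condition playing a "goal".
domain = range(10)

# "Query" with two goals: X is even AND X > 4.
with_both_goals = {x for x in domain if x % 2 == 0 and x > 4}

# Remove the goal "X > 4": the solution set can only grow, never shrink.
goal_removed = {x for x in domain if x % 2 == 0}

assert with_both_goals <= goal_removed  # subset: solutions only increased
print(sorted(with_both_goals), "is a subset of", sorted(goal_removed))
```

In impure Prolog (with cut, var/1, assert, and so on) this guarantee breaks down, which is exactly why the pure monotonic subset matters for reasoning and debugging.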

In Prolog, these guarantees can be used to test and debug your programs by logical reasoning. In fact, to a significant extent, you can even automate debugging thanks to these properties. Such techniques were pioneered by Ehud Shapiro in his 1982 PhD thesis, Algorithmic Program Debugging:

http://cpsc.yale.edu/sites/default/files/files/tr237.pdf

The thesis is an excellent read, and I highly recommend it! Specifically, quoting from page 155:

Although Lisp is considered a high level language, it is not clear that it encourages precision and clarity of thought. As one Lisp hacker puts it [91]: "Lisp...is like a ball of mud. You can add any amount of mud to it...and it still looks like a ball of mud!". This aspect of Lisp is the hacker's delight, but the theoretician's nightmare. It is known that one may implement without too much effort a reasonable Prolog in Lisp. However, the issue is not implementation — the issue is the method of thought. Based on my experience with both languages I maintain that one's thoughts are better organized, and one's solutions are clearer and more concise, if one thinks in Prolog rather than in Lisp.

Another key feature of Prolog is that constraints blend in seamlessly, leading to an important declarative programming paradigm called constraint logic programming (CLP). This is the key reason why Prolog is frequently used to solve combinatorial tasks in practice, and often an important motivation for buying a commercial Prolog system.


> Although Lisp is considered a high level language

Lisp is generally not restricted to a particular 'level'. It also comes out of the box with support for different programming paradigms, but with no particular language support for logic programming.

> if one thinks in Prolog rather than in Lisp

Lisp supports high-level programming in logic via zillions of extensions. This has been popular throughout Lisp's history. Thus, if we have a problem which is best solved by some kind of logic, it would be great if the Lisp system could be programmed in terms of that logic, which is not unusual.

That said, a programming language built from the ground up on a logic formalism, as Prolog is, is a great tool.


This can probably be done with CSS. Also assembly language; you can do it with assembly language as well.



