One of the reasons C is the way it is is that the processor works that way.
The abstraction isn't perfect, but it is decent enough for a first approximation. I'm guessing Fortran is similar. Imperative languages of that era were used because they worked very closely with the actual actions of the CPU and gave fine-grained control over execution, allowing efficiencies. Since then, popular languages have all been improvements that take advantage of increased computing power to do interesting things.
Sometimes new paradigms rear up - eg, Clojure with its set-and-forget approach to values. That paradigm is sold on the basis that it is easier to reason about and debug, and, as a practical bonus, faster in many cases than the alternative. This is a great reason to learn a new paradigm.
Prolog is an experiment, but has anyone found a domain for the paradigm of logic programming? I've used it, and I couldn't see any domain where it seemed to have an advantage. For AI in particular - the way the world is developing, it seems logic is a weak foundation for intelligence. At this point, given that we know humans do not use logic for their intelligence, it would be a little surprising if logic is the best way to approach that particular problem. In most domains, the error in your sensor overwhelms the existence of fact and the 'rules' are more what you'd call guidelines.
If I rule out AI, what field is Prolog especially suited for? It doesn't read like it has a natural advantage in debuggability (quite the reverse imo), GUIs, stream processing, efficiency, etc. Lacking a killer domain, why not implement it as an afterthought library in C or Lisp, and keep the imperative languages we have so much research in?
Basically, the default state of Prolog is dead. It doesn't represent how humans want to think about programming, it doesn't represent how the computer implements programming and so if it doesn't have a domain where its paradigm is spot on perfect it isn't going to take hold. It is an interesting idea and might yet be useful, but someone will need to find a reason to justify it.
Prolog is useful everywhere proofs are useful. Now, a proof isn't what you may think of as a proof in the general sense.
One example of a proof that Prolog is good at is solving games, such as solving a particular instance of tic tac toe, or 4-in-a-row. Beyond these toy examples, you can see a good fit in any application which needs to follow a procession of rules. These rules may be in a feeding/bleeding relationship — this is essentially constraint logic programming. The problem goes: given these constraints, does this statement fulfil the constraints? How can I make the statement fit the constraints? Which is the set of statements that fit these constraints?
I know of an industry application in German luxury cars, where the customer can order a car with a lot (a lot a lot) of different features, but some are mutually exclusive. If you have this particular leather seat, you can't have that particular heating unit or whatever. Now tell me if the customer selected a valid configuration, and if not, tell me what the valid configurations "next to" the desired one are.
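A minimal sketch of that check in Python (the feature names and exclusion rules here are invented for illustration; a real configurator would feed thousands of such constraints to a SAT or CLP solver):

```python
# Hypothetical feature options and mutual-exclusion rules, loosely
# modelled on the car-configurator example above.
EXCLUSIONS = {
    frozenset({"leather_seat", "seat_heating"}),
    frozenset({"leather_seat", "cloth_seat"}),
}

def is_valid(config):
    """A configuration is valid if no exclusion rule fires."""
    return not any(pair <= config for pair in EXCLUSIONS)

def nearby_valid(config):
    """Valid configurations near the wish: drop a single feature."""
    return [config - {f} for f in sorted(config) if is_valid(config - {f})]

wish = {"leather_seat", "seat_heating", "sunroof"}
print(is_valid(wish))            # the leather seat / heating rule fires
for alternative in nearby_valid(wish):
    print(sorted(alternative))
```

Answering "which configurations fit?" by enumerating single-feature changes is exactly the kind of search a CLP system does for free, over far richer neighbourhoods.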
Another application is DCGs (Definite Clause Grammars), which have the expressive power of regular expressions (EDIT: see below for a correction, it appears they're at least context-sensitive.) If you've done monadic parsing, DCGs feel kinda similar. They're super-close to Phrase Structure Grammar rules, and used to be part of my linguistics curriculum (I hear it's all ML nowadays. Pity.)
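To give a rough Python flavour of that similarity (an assumed analogy, not real DCG syntax): a DCG nonterminal can be read as a function that consumes a prefix of the input and yields whatever remains, much like the difference lists Prolog compiles DCG rules into. A toy grammar for a^n b^n:

```python
def ab(s):
    # ab --> [].
    yield s
    # ab --> [a], ab, [b].
    if s[:1] == "a":
        for rest in ab(s[1:]):
            if rest[:1] == "b":
                yield rest[1:]

def matches(s):
    """True if the whole string belongs to the language a^n b^n."""
    return any(rest == "" for rest in ab(s))

print(matches("aabb"))   # True
print(matches("aab"))    # False
```

Each alternative of a rule becomes a `yield`, and backtracking falls out of iterating the generator - the same shape monadic parsers have.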
Yes, Prolog fits niche domains. But if you find a domain, such as regular languages or a problem that can be solved by CLP, the elegance of logic programming can hardly be beaten. It's really a joy to work with in the right domain. It's absolutely awful as a general purpose programming language, though — mostly because the debugging is so complex, it almost feels nondeterministic (it isn't, but there were many moments at 2 in the morning where I could've sworn it was.)
In principle, Prolog is great for searching for solutions of a formally posed problem. Think of it as a logic search engine. If your problem can be stated in logic, and the logic problem can be evaluated in a finite time, Prolog's your friend.
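That framing can be made concrete with a toy in Python (a hypothetical sketch, far short of real Prolog - no unification, no variables): a rule is just a search over a fact base for bindings that satisfy every subgoal.

```python
# A hand-rolled "logic search": two facts and one rule, with naive
# backtracking over the fact base.
FACTS = {("parent", "tom", "bob"), ("parent", "bob", "ann")}

def grandparent(x, z):
    # grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
    for (rel, a, y) in FACTS:
        if rel == "parent" and a == x and ("parent", y, z) in FACTS:
            return True
    return False

print(grandparent("tom", "ann"))  # True
```

In Prolog the two facts and the one rule are the entire program; the search loop above is what the engine supplies for you.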
> DCGs (Definite Clause Grammar), which have the expressive power of regular expressions.
DCGs have a lot more expressive power than regular expressions. They are at least context-sensitive (i.e., stronger than context-free grammars as well). But off the top of my head I don't see why they wouldn't be able to capture recursively enumerable languages (i.e., the languages of completely unrestricted grammars) as well.
> Think of it as a logic search engine.
Think of it as a logic search engine embedded in a general-purpose programming language, because that's what it is.
> Think of it as a logic search engine embedded in a general-purpose programming language, because that's what it is.
I think this is the whole problem. The Prolog language is not great as a general-purpose language, and the "logic search engine" doesn't need to be so closely identified with a particular language. Prolog should be a library, not a language, and not the almost sole representative of a programming paradigm.
The trouble is that your program can become non-terminating in all but the regular case IIRC. Man it's been 10 years since I've done Prolog, so take what I say with a grain of salt.
> For AI in particular - the way the world is developing, it seems logic is a weak foundation for intelligence. At this point, given that we know humans do not use logic for their intelligence, it would be a little surprising if logic is the best way to approach that particular problem.
Whoa, "humans do not use logic for their intelligence"? Hold on there. How do we know that?
I suspect the line of reasoning is something like: if machine learning is massively successful without a great deal of logic or deductive rules, then the human mind works similarly.
Which stems in turn from thinking the brain is just like a computer, and vice-versa, which is arguable (let alone lamentable).
I almost finished implementing a Prolog-like system to determine what operating system is installed in a guest image (replacing a huge chunk of C code which had accreted lots of rules and special cases). Described in a bit of detail here: https://rwmj.wordpress.com/2015/12/06/inspection-now-with-ad...
This never went upstream for a couple of practical reasons:
(1) It wasn't much simpler than the C code. In fact once you've gone and explained to people how Prolog works they might as well have digested the C code.
(2) There wasn't a good, embeddable, widely-available Prolog "engine" we could use. As a result I wrote my own engine which only did the simplest sort of forward inference and so wasn't at all efficient.
Not at all on topic, but this is the first time I've seen double repetition like that in the wild. Makes me wonder about examples of non-contrived buffalo sentences.
I got my start with semantic web/linked data using semweb library and SWI-Prolog. I think that RDF data stores (and more general graph databases like Neo4j) scale better, but semweb is great for experimenting.
Because at some point you want to have business rules that use your graph, and make a pretty UI, and write a graph renderer, and implement GraphQL - and all of those would be horrible to do in Prolog. So Prolog would have to be the backend of a graph API that you would use from $GENERAL_PURPOSE_LANGUAGE, and at that point it may not be the fastest.
Prolog has a great programming model, but atrocious efficiency. Programming anything beyond 8-queens quickly becomes an exponential combinatoric minefield.
> it seems logic is a weak foundation for intelligence
What an offensive statement.
Over two thousand years of fighting intuition-based judgements, and now that DL can merely tell a gorilla from a mouse (with 712 hours of training on 91 GPUs), it is the best form of "intelligence", but Logic - a fundamental part of mathematics that helped us u.n.d.e.r.s.t.a.n.d the universe better - is a "weak foundation for intelligence"?
Logic (needs to be specified by GP, but assuming predicate logic, as we're talking prolog here!) can't hold probabilities (very well) or contradictions.
And you have very faint hopes to have a DL system explain to you why it thinks it is a gorilla. Each system has its weakness.
Still, logic systems are a lot closer to reasoning and understanding (if not for the machine itself, at least for its operators), in other words closer to intelligence than other systems.
I strongly believe that what you don't want for an AI is for it to make the same mistakes and have the same weaknesses as humans.
For instance, people will never accept that the AI of a car kills by accident, even though human drivers kill a lot more etc. A system that makes life-or-death decisions but has no liability nor ethics is difficult to accept.
The article talks about expectations - unrealistic expectations even. People expect from AI that it makes clever decisions like humans and "no mistakes" like machines (scare quotes for bugs, failures etc.).
Now, I don't suggest that DL is useless. It is good at what it does, but contrary to what OP suggests, I don't see it as the sole or even main component of an AI system.
There is some increasing interest in logic languages recently, mostly as an extension of the DL/ML approach - probabilistic logic for example. Surprisingly simple probabilistic systems can be built with Prolog [1].
The plague of modern software engineering is "there are no updates, it must be unmaintained". This attitude makes tons of solid, old, working software seem "outdated" and creates a cultural momentum towards new, shiny, broken shit. The result is ecosystems like js. Maybe we should believe software can be complete?
You definitely get +1 for writing the most true comment I have read today. This also happens in the Common Lisp world: there are old libraries that are very solid and useful even if they have not been modified for a while.
Years ago, I thought of writing a science fiction story predicated on the idea that in the far future the world would run on ancient software that was proved correct and made perfect by being debugged over the centuries.
For example, OpenSSL is a very strong force behind what we call "the internet"; without something like OpenSSL we wouldn't have the internet today. When Heartbleed was found in OpenSSL the dynamic wasn't to retire OpenSSL and migrate to GnuTLS or something else, or even something new, but to patch OpenSSL. Why? Because even though OpenSSL development is silent, we know that it IS maintained. If we find another Heartbleed today, nobody will want to replace the entire cryptographic infrastructure of the internet; instead we will fix OpenSSL.
Other examples: zlib, SWI-Prolog, most BLAS implementations, some GCC backends, Concorde (heuristic-based TSP solver), most parts of algorithm-based libraries (CGAL, Eigen, GMP, GSL etc...) (of course these libraries are implementing new features, but there are some parts of the codebase that didn't receive any commit for years)...
I think good signals to tell a package is complete: (1) there are no known critical bugs (2) if we found a critical bug today, we're reasonably sure we would get a fix in a reasonable amount of time (3) the project and project goals are well-documented (4) everything documented as a goal is implemented, i.e. there is no ongoing development happening.
core.logic does have pretty clear maintenance commits when needed, so you are wrong there. There just aren't many changes needed.
Yet there is a list of very useful features discussed in the repository that does not advance very quickly, mostly due to David Nolen being busy with other great work (clojurescript, om, etc.) lately.
I’m a heavy user of core.logic in my personal project, and I would like to see the project gain more powers; I just didn’t /need/ them yet. I’m thinking of dedicating some time to it. Perhaps you can too!
As I was seeing another post, it seems like Prolog's advantage would be as an embedded DSL. So you can write the logic parts in Prolog, then switch to C++ or Java or whatever for the GUI.
Interesting idea that FGCS hype killed Prolog; on the Lisp side, we tend to think that AI hype killed Lisp. Also interesting that the author considers Prolog a competitor to Lisp, when (I think) most programmers don't even think of Prolog. Similarly, while we Lisp programmers think of Algol-like languages as our great competitors, I daresay the vast majority of C, Python, C++, Perl, Java, Go &c. programmers don't even think of Lisp.
I think one can just run a Prolog engine in Lisp these days, which would get one the best of both worlds. But I'm not a Prolog programmer, so … perhaps I'm incorrect.
If you reread my comment, you will notice my claim was the other way around. I was trying to point out that the encompassing argument is pointless by showing that neither Lisp nor Prolog can evaluate full Java code (because the reverse is certainly assumed to be possible; of course there are JVM Lisps and Prologs).
But really which language is bigger is pointless to begin with.
I can't remember where I heard this, but I was told that the death of logic languages occurred due to the poaching of the majority of good active researchers and developers by the big database companies.
Are you saying researchers belong to their university? Or are you saying once someone is a researcher they are no longer allowed to move to industry?
Also in my experience a lot of researchers are grad students, who move to industry on completion of their degree - does getting a job after graduating mean you were “poached”?
Is this a serious comment? Poaching is a pretty common term for actively hiring someone, especially groups of people from another position of employment. Similar to head hunting not referring to actual collection of heads.
If you are interested in logic programming languages (like Prolog) and in functional programming languages (like Haskell), you should definitely take a look at the Curry language.
For people who know Haskell, such a type definition should look very familiar:
data IntTree = Leaf | Branch Int IntTree IntTree
The mind-bending stuff starts when you see "function" definitions like

choose x y = x
choose x y = y
:-) That's mind bending if you don't know logic programming. So how can the above statement even make sense? You have to understand that a logic program will try all possible statements until it finds a match, or simply all possible statements if its job is to do an exhaustive search of the problem space.
So in the choose function above, the execution algorithm (the runtime, as it were) would try to choose x first, and see if that yields a result that matches the query. It then chooses y. This may yield no, one, or many results, depending on the exact constraints of the search space.
If you posed the problem as choose 'a' 'b' then the answer would be 'a' if you're content with any solution (it gives you the first it finds), or ['a','b'] if you wanted all solutions.
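In Python terms (a loose analogy, not how Curry is actually implemented), the non-determinism of choose can be mimicked with a generator that enumerates every alternative, leaving the caller to take the first solution or all of them:

```python
def choose(x, y):
    # Each `yield` corresponds to one non-deterministic alternative.
    yield x
    yield y

first = next(choose('a', 'b'))   # content with any solution: 'a'
every = list(choose('a', 'b'))   # exhaustive search: ['a', 'b']
print(first, every)
```

The difference is that in Curry the runtime threads this search through the whole program for you, instead of one generator at a time.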
Man, now I wanna go do some Prolog… pity there's nobody who's gonna pay me to do it, and I have to do PHP and JS.
Interesting read. The mention of the book “The Fifth Generation” by Edward A. Feigenbaum and Pamela McCorduck (that I read in 1983, and was totally into at the time) made me think of Kai-Fu Lee's new book "AI Superpowers: China, Silicon Valley, and the New World Order" that markets the idea that China will win the AI war because they have more data and entrepreneurs and established companies who go deep with developing online to offline businesses (invest heavily in supporting physical infrastructure).
History is not kind to the Japanese 5th generation project. My hunch is that Kai-Fu's predictions will be more accurate than Edward's and Pamela's predictions.
I used to use Prolog a lot, now I just occasionally play with it. Contrasting to Lisp: until I started my current job 15 months ago (Python ML), Lisp was one of the cornerstones of my work and consulting business for 35 years.
Maybe I have the history wrong but it seems to me that researchers moved on from classical logic to lambda calculus and intuitionistic Martin-Löf type theories and associated programming languages (ML, Miranda, OCaml, Haskell, Coq).
Am I wrong in thinking that Coq for example can be considered a replacement of Prolog?
Languages like Prolog and Haskell represent different views on the relation between logic and computation. Prolog models computation as proof search, whereas Haskell models it as proof normalization. In the former case your program basically consists of a bunch of formulas that are used as assumptions in an attempt to answer a query at runtime. With Haskell, your program consists of terms, which correspond directly to proofs under the Curry-Howard isomorphism. From the type theory perspective, with functional programming languages like Haskell the programmer writes terms (proofs) of some given type (formula), whereas with logic programming it's the runtime that tries to answer whether a given type is inhabited (i.e., if a proof exists). I'm being a bit sloppy here in that I'm disregarding the differences between classical and intuitionistic logics, although classical logics, too, have been studied in type theory via their double negation translations.
I think Girard's ludics, among other goals, tried to unify these two views, although I know too little about it.
Finally, note that Coq, like Haskell, models computation as proof normalization, although its design goals are very different from Haskell's.
> Am I wrong in thinking that Coq for example can be considered a replacement of Prolog?
I don't know if you're right or wrong about that. But Coq is about as "dead" as Prolog is. That is, it isn't used very much for real-world programming. It doesn't look like it's going to become more alive very soon, either.
Even if Coq and Prolog can be considered as "in the same style/family", maybe the problem is that for most problems, most people just don't want to program that way?
I suspect that most of programming languages in common use today started out as academic exercises that weren’t used for real-world programming and that most people didn’t enjoy programming in.
FWIW This article has been posted/discussed 3 times over the last 8 years. A follow-up article was also posted/discussed once. To see them just search for "Who killed Prolog" using the HN search function at bottom of news.ycombinator.com.
Side-note: The tutorial calls the condition the 'body' and the resulting fact the 'head', which took me a while to absorb since my brain can't stop thinking of the common syntax: if(conditions){body}
That tutorial is fast, but it's a pretty bad tutorial that will teach you terrible habits and a very incomplete understanding of Prolog. Unfortunately there are no good free ones.
Many people seem to like https://www.metalevel.at/prolog (featured here several times). Although I don't think it's very good from a didactic point of view, many disagree, and at least the author really knows what they are talking about.
I have a PDF of the fifth edition of "Programming in Prolog" by Clocksin and Mellish lying around here. I could swear it was available for free as open access for a while, though a very quick web search doesn't confirm this.
Thanks. Looks like a good proper learning material for Prolog. However, I just needed to get a feel or taste of what Prolog is without spending too much time reading concepts.
Thanks for the book suggestion. Quick duckduck for it gives me this pdf: