
Lisp Prolog and Evolution - dhkl
http://blog.samibadawi.com/2013/05/lisp-prolog-and-evolution.html
======
octo_t
I absolutely love programming in Prolog. I've never needed to write anything
large in it (which is where most Prolog interpreters fall short), but when it
all comes together so beautifully at the end, it's quite amazing.

I thoroughly recommend "The Art of Prolog" which is an engaging and fun read.

~~~
dhkl
When I first learned about Prolog in an AI class, the first thing that struck
me was how beautiful Prolog/logic programming solutions are.

In non-declarative languages like Java, you have to build up the different
pieces of computations, while you keep maintaining a mental picture of how the
various pieces fit together.

In Prolog, you declare the entities and the relationships/constraints between
them, and the system will build the solution for you through inference.

David Nolen has done some awesome work writing the core.logic[1] library,
which makes those Prolog goodies accessible to Clojure programmers.

[1] <https://github.com/clojure/core.logic>

~~~
arethuza
I seem to recall that one of the issues with Prolog is that, above a certain
level of complexity, you can't really write purely declarative Prolog
"programs" - you have to start considering the procedural aspect as well,
which (in certain cases) can be non-trivial.

~~~
yaantc
(See my other comment below too on this topic)

You indeed have to understand how the Prolog engine works in order to
structure the declarative statements of your code in a way that will lead to
an efficient execution.

One way to see a Prolog program is as a sequence of facts and statements,
plus at least one query. The Prolog engine then searches for a solution that
fits the given facts and requirements and answers the query. In a way, the
engine searches a space of possible answers to the query to find the one(s)
that match the given facts and requirements. The key to speed is to structure
the facts and statements so that the search fails as early as possible when a
wrong path is taken. This amounts to pruning the useless parts of the search
space as aggressively as possible, so that the Prolog engine does not waste
time evaluating options that are doomed in the end.
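The effect of ordering on pruning can be sketched outside Prolog too. Here is
a hypothetical Python toy (not the Prolog engine itself, which uses
unification and backtracking rather than brute-force enumeration): two
constraints with the same declarative meaning, checked in two different
orders, do very different amounts of work.

```python
from itertools import product

def search(domain, constraints):
    """Try every candidate pair, abandoning it at the first failed
    constraint; return (solutions, number of constraint checks)."""
    checks = 0
    solutions = []
    for candidate in product(domain, repeat=2):
        for constraint in constraints:
            checks += 1
            if not constraint(candidate):
                break              # prune this candidate early
        else:
            solutions.append(candidate)
    return solutions, checks

domain = range(100)
selective   = lambda p: p[0] == 3   # true for 100 of the 10000 pairs
unselective = lambda p: p[1] < 90   # true for 9000 of the 10000 pairs

sols_a, checks_a = search(domain, [selective, unselective])
sols_b, checks_b = search(domain, [unselective, selective])
assert sols_a == sols_b             # same declarative meaning...
print(checks_a, checks_b)           # prints: 10100 19000
```

Same answers either way, but putting the selective constraint first nearly
halves the work - which is the brute-force cousin of what good statement
ordering does for the Prolog engine.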

Before getting this I was often frustrated with apparently nice and correct
Prolog programs that took forever and in effect just looked stuck (at some
point you just stop waiting and abort the execution). I guess it's a pretty
common frustration when beginning in Prolog. But once you get it, it's
possible to come up with efficient code. It's still scary to see that some
small changes in statement ordering can lead to dramatic differences in
runtime. You can have big differences in performance in imperative
programming too, but it's rare that it's so bad that a first implementation
is completely useless. In Prolog it's quite common. And the way to optimize
Prolog performance is very specific: you need to learn to anticipate how the
engine walks the search space. I guess it's one of the big roadblocks to the
practical use of Prolog.

------
RBerenguel
I find Prolog a marvel in "look what I can do!" Writing stupidly fun code in
it is almost a no-brainer, compared to other languages (I have a half-assed
English grammar parser in 60-odd lines, with a few comments!) But getting into
prolog needs a complete rewire of how you think about programming: if you are
used to imperative (or functional, or list-based, or almost whatever else)
prolog feels very foreign.

I'm also a Lisp aficionado, having written more than a few small pieces (not
counting emacs lisp, where I write more than a few!) to scrape the web or draw
Lavaurs chords.

And my day-to-day I write PHP, Python, awk, R and whatever it takes to improve
revenues in my company. So I have both perspectives, Lisp & Prolog and its
awesomeness and "the other side." And mixing them up (splurge into data with
awk, write it as Prolog terms, analyse its structure in Prolog and represent
it in R) is one of the biggest joys of programming. Don't get entrenched in
your "language," learn as many as you can and cherry-pick each one as you see
fit for the problem.

~~~
smu
Indeed! During a security assessment of a large code base, I wrote a simple
ruby source code parser that spat out Prolog terms. After that, it was a
breeze to find certain kinds of logic errors, such as code paths that used
user input before sanitisation...

It's beautiful when it all works together, checking everything manually would
certainly have been a PITA and would have resulted in a lower quality result.

Definitely an aha moment!

~~~
fabriceleal
Could you open-source it? I'm very interested in seeing how one would do
that. I've been thinking about writing one myself, for PHP (I have a somewhat
large codebase to inherit that only works in an old version of PHP, and I
might need to migrate it).

~~~
smu
I don't have the IP for the code so I can't copy it here, but what I did was
very simple and the gist of it is described below:

The parser used regular expressions to recognise function definitions and
calls in those definitions. I used file names as the function scope, this was
good enough because there were no two functions with identical names in the
same file. Function calls became the following terms: "calls(x,y,z).", meaning
that function x in file y calls z.
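As a rough illustration (the original parser was written in Ruby and is not
public, so the regexes, file name, and sample source below are all made up),
a few lines of Python can emit such terms:

```python
import re

# Hypothetical, simplified patterns for Ruby-style source: real code would
# need to handle receivers, blocks, nesting, keywords that look like calls...
DEF_RE  = re.compile(r"^\s*def\s+(\w+)")
CALL_RE = re.compile(r"(\w+)\s*\(")

def to_prolog_terms(filename, source):
    """Emit one calls(Caller, File, Called). term per call site found."""
    terms = []
    current = None                      # function currently being defined
    for line in source.splitlines():
        d = DEF_RE.match(line)
        if d:
            current = d.group(1)
            continue
        if current:
            for called in CALL_RE.findall(line):
                terms.append(f"calls({current}, {filename}, {called}).")
    return terms

src = """
def handler
  query = params(request)
  run_sql(query)
end
"""
print("\n".join(to_prolog_terms("handler_rb", src)))
# calls(handler, handler_rb, params).
# calls(handler, handler_rb, run_sql).
```

The output file of such terms can then be consulted directly by a Prolog
interpreter as a fact database.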

These "calls" terms actually define a directed graph. If you google "prolog
path through directed graph", there are lots of hits that will help you out.
The following (untested) code should get you started:

    
    
        % there is a path if there is a direct call
        path(Caller, File, Called, [calls(Caller, File, Called)]) :-
            calls(Caller, File, Called).
        % there is a path if Caller calls some function A and there is a path from A to Called
        path(Caller, File, Called, [calls(Caller, File, A) | P]) :-
            calls(Caller, File, A),
            path(A, _, Called, P).
    

After that, you can find all possible paths with "findall/3" and check for
existence of a certain known good/bad function with "member/2" (again, google
is your friend)
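For readers without a Prolog to hand, here is a rough Python analogue of that
query, with a hypothetical call graph; in Prolog this would be "findall/3"
over "path/4" plus a "member/2" check on each collected path:

```python
# Hypothetical call-graph facts: (caller, file, called) triples,
# mirroring the calls(Caller, File, Called). terms described above.
CALLS = [
    ("handler", "web_rb", "params"),
    ("handler", "web_rb", "run_sql"),
    ("params",  "web_rb", "read_input"),
]

def all_paths(caller, called, seen=()):
    """Enumerate every call path from caller to called, like findall/3
    collecting solutions of path/4 (with a visited set to stay finite)."""
    for c, f, target in CALLS:
        if c != caller or target in seen:
            continue
        edge = (c, f, target)
        if target == called:
            yield [edge]
        for rest in all_paths(target, called, seen + (target,)):
            yield [edge] + rest

# Does any path from handler reach read_input without passing sanitise?
paths = list(all_paths("handler", "read_input"))
tainted = [p for p in paths
           if not any(step[2] == "sanitise" for step in p)]
print(tainted)
```

The membership test on each path plays the role of member/2 here; findall/3
corresponds to exhausting the generator with list().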

Due to some properties of the code, this simple approach worked well enough
for me. Hopefully this helps you out.

~~~
RBerenguel
Probably needs a green cut in the first term... Or maybe not. I'm quite
backtracking-puzzled about green cuts in generic terms I write, so thinking
about them in others' code is even more puzzling :)

------
swannodette
A few people have asked if the video and slides for this talk are available.
Unfortunately it did not get recorded, the slides don't make much sense
without the words, and there was a considerable amount of live coding in a
REPL.

However I'll be attending <http://webrebels.org> where I'll be giving a more
refined version of the talk - it will be recorded.

------
ichinaski
AI Algorithms, Data Structures, and Idioms in Prolog, Lisp, and Java:
[http://wps.aw.com/wps/media/objects/5771/5909832/PDF/Luger_0...](http://wps.aw.com/wps/media/objects/5771/5909832/PDF/Luger_0136070477_1.pdf)

------
oneandoneis2
On the subject of why amazingly-powerful, ahead-of-their-time languages don't
catch on.. I'd be interested to know if a study has ever been done on the
"accessibility" of a language and its popularity.

By which I mean: A total novice, even a non-programmer, can be given a simple
bit of PHP/Javascript and work out what it does and how to make minor changes
to it. But with something like Lisp or Haskell you just can't do that - you
need to spend some time learning the syntax before you can do anything with
them.

I'd be surprised if the ramp-up time to be able to do useful things with a
language didn't have more of an effect than how powerful and useful it is.

~~~
arethuza
"you need to spend some time learning the syntax"

But Lisp has rather _less_ syntax than most other programming languages - and
that's possibly a weakness rather than a strength when it comes to anyone new
to the language.

I suspect there is a sweet spot when it comes to the syntactic complexity of
programming languages - too little and people get lost in the generality and
abstractions, too much and it's difficult to remember it all, and you end up
with coding standards desperately trying to restrict which features may be
used (e.g. banning the ternary conditional operator).

~~~
oneandoneis2
> Lisp has rather less syntax than most other programming languages - and
> that's possibly a weakness

Yup: it's like saying that binary is easier than decimal because it has fewer
digits - the average Joe would still find it easier to do his maths in base
ten :)

~~~
mheathr
Personally, I find both Lisp-1 and Lisp-2 easier to read (when
pretty-printed) than any other language syntax.

That the syntax, or lack thereof, directly expresses the program's AST is
quite elegant.

Because the syntax is so sparse, the important elements of the code are
evident, as they are the only information displayed.

Lisp's syntax also enables powerful utilities like Paredit to exploit that
structure for unrivalled transpositions of the code as refactoring occurs.

I do think other syntax designs (the ML family of languages, for instance)
look more aesthetically pleasing at first glance than either Lisp-1 or Lisp-2
appears to the novice, but it is something I have come to appreciate as my
proficiency in Lisp has improved.

Code simultaneously being data, and the converse, is another elegant
consequence of Lisp's design; it is what makes it possible for tools and the
environment to provide so much.

I wish the environment provided in other languages was as mature as that of
Common Lisp (<http://www.cliki.net/development>), because it is immensely
pleasurable to code in once the learning curve is overcome.

The documentation and literature available for Lisp are also excellent; many
of the books are classics of Computer Science and are worth reading
regardless of whether you actually use Lisp to develop with.

------
serichsen
The following shows a common misconception:

"There is no reason why you cannot combine strong types or optional types with
LISP, in fact, there are already LISP dialects out there that did this."

Common Lisp already has strong, dynamic, optional types.

"Strong" means: there is no implicit type conversion.

"Dynamic" means: runtime things (objects) have a type, not necessarily the
(static) variables that hold them. This is also called "late binding".

"Optional" means: you need not specify a type for everything; types can be
inferred from context.
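As an aside, Python (used here purely as a familiar stand-in; Common Lisp's
type declarations work quite differently) exhibits the same three properties:

```python
# Strong: no implicit conversion between unrelated types.
try:
    "1" + 1
except TypeError:
    pass                        # str + int raises rather than coercing

# Dynamic: the *object* carries the type; the variable does not.
x = 1
assert type(x) is int
x = "one"
assert type(x) is str           # same name, different runtime type

# Optional: annotations may be added but are not required.
def double(n: int) -> int:      # the annotation is advisory, not enforced
    return n * 2

assert double(21) == 42
```

In Common Lisp the optional part is done with declare/declaim forms, which a
compiler such as SBCL can also use for optimization and compile-time
warnings.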

~~~
maaku
> "There is no reason why you cannot combine strong types or optional types
> with LISP, in fact, there are already LISP dialects out there that did
> this."

> Common Lisp already has strong, dynamic, optional types.

Isn't that what he said? Common Lisp is a dialect of Lisp.

------
nnq
> David Nolen give a talk at a LispNYC Meetup called: LISP is Too Powerful

...is this up online? does anyone have a link to it?

~~~
dhkl
Slides and videos aren't available at this point. But he is presenting this
again at Web Rebels.

<https://twitter.com/swannodette/status/336816354868989953>

------
ecolak
"Haskell and LISP both have minimal syntax compared to C++, C# and Java". I
agree that LISP does, but I can't really say the same for Haskell, especially
once you get into the whole monad stuff...

------
MostAwesomeDude
Dunno if the author's around, but Haskell does not descend from Lisps in the
way that he thinks; it is much more closely related to the ML family.

~~~
BellsOnSunday
It isn't descended from Prolog either -- I presumed the author was making a
point about conceptually similar flavours of type systems, semantics etc. and
Haskell's position as a typed Lambda calculus.

