I haven't done it, but I hear it's excellent.
That said, looking back, I'm not sure it encourages the best coding style or the best idioms. It's perfect for getting your head around the paradigm though!
I'm currently writing an analogous tutorial for implementing Prolog in Haskell, but it's on a bit of a hiatus thanks to some school and research related deadlines coming up. Hopefully people are interested in something like that :).
Interestingly enough, the Prolog interpreter is actually shorter than the simple Scheme one--largely because we can take advantage of the list monad for the backtracking. I think it's still a very cool exercise.
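To make the "list monad for backtracking" point concrete, here is a minimal sketch (my own toy example, not code from the tutorial): each bind in the list monad tries every alternative, and `guard` prunes branches that fail, much like Prolog backtracking on a failed unification.

```haskell
import Control.Monad (guard)

-- Enumerate Pythagorean triples up to n, nondeterministically.
-- Every (<-) is a choice point; guard failing "backtracks" to the
-- next untried alternative.
pythagoreanTriples :: Int -> [(Int, Int, Int)]
pythagoreanTriples n = do
  a <- [1 .. n]             -- try every a
  b <- [a .. n]             -- try every b >= a
  c <- [b .. n]             -- try every c >= b
  guard (a*a + b*b == c*c)  -- prune failing branches
  return (a, b, c)

main :: IO ()
main = print (pythagoreanTriples 20)
-- [(3,4,5),(5,12,13),(6,8,10),(8,15,17),(9,12,15),(12,16,20)]
```

The same structure carries over to unification: a goal that can be solved several ways just returns several substitutions in the list, and failure is the empty list.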
I wrote it as a project work in an Artificial Intelligence class. It very closely resembles the logic programming language implemented in "Structure and Interpretation of Computer Programs".
I'm sorry there isn't any documentation for it.
Here's a little example of what it looks like, from examples/royalfamily.slo:
(siblings ?x ?y) <-
(parent ?x ?parent) & (parent ?y ?parent)
(grandparent ?x ?y) <-
(parent ?x ?z) & (parent ?z ?y)
(cousins ?x ?y) <-
(grandparent ?x ?gp) & (grandparent ?y ?gp) & !(siblings ?x ?y)
(I'm only being partly tongue-in-cheek - that's the mentality behind idiomatic Lisp).
"Scheme" has many different dialects as well. Racket is _a_ Scheme, MIT/GNU Scheme (the one Edwin is written in) is another, and then there is Gambit Scheme. There are certainly other Schemes, and an implementation of Scheme in Haskell isn't going to be the same as the other dialects listed here.
There are quite a few examples of Prolog-in-Lisp interpreters, but no one would argue that any of them _is_ Prolog.
Once you are comfortable with Haskell, the code makes much more sense and is a great starting point for writing an interpreter of your own.
Very cool read, though.
What do Haskell or CL implementations lack to be competitive with good old C++ there?
While CL and Haskell can be extremely fast and even outperform C++ in certain situations, the lack of manual resource management makes them ill-suited to writing a database system. There are few, if any, other applications where this matters as much.
Consider Neo4J, Cassandra, CouchDB, Riak, Voldemort, all major players in the current NoSQL world. All are written in Java or Erlang.
Just how much computation does your database need to do, unless it's some special in-memory database like Redis? If you have your data on ordinary disks, you'll spend most of the time waiting for them.
map (head &&& length) . group . sort
filterM (const [True, False])
If you want, I can explain these or give other examples of expressiveness I don't think you'll find in D.
http://stackoverflow.com/a/10410997/578288 – the `map` snippet converts a list into a multiset represented as a list of `(element, count)` tuples
http://stackoverflow.com/a/5378799/578288 – the `filterM` snippet returns the powerset of its list argument (http://en.wikipedia.org/wiki/Power_set)
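For concreteness, here are both snippets in runnable form (the names `tally` and `powerset` are mine, not from the linked answers). Note the composition is `group . sort`: since composition applies right-to-left, the list is sorted first so that `group` collects equal elements into runs.

```haskell
import Control.Arrow ((&&&))
import Control.Monad (filterM)
import Data.List (group, sort)

-- Multiset view of a list: (element, count) pairs.
tally :: Ord a => [a] -> [(a, Int)]
tally = map (head &&& length) . group . sort

-- Powerset: filterM in the list monad both keeps and drops each
-- element, enumerating every sublist.
powerset :: [a] -> [[a]]
powerset = filterM (const [True, False])

main :: IO ()
main = do
  print (tally "mississippi")     -- [('i',4),('m',1),('p',2),('s',4)]
  print (powerset [1, 2, 3 :: Int])
  -- [[1,2,3],[1,2],[1,3],[1],[2,3],[2],[3],[]]
```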
Have there been significant improvements in the past couple of years?
The thing I like about D is that there are no surprises; everything works the way you'd expect it to. You can guess at what the code should look like, and chances are it'll work.
Here's a Scala to Haskell
A primitive arithmetic interpreter and the `set` primitive (why?) are not even close to what Lisp is.
Lisp is not just prefix notation with parentheses. It begins with the notion that code is data (the list structure), that everything is a symbol or a reference, and that a value has a type, not a variable; together, these give us macros for free.
An implementation of a prefix-notation calculator? Well, it is less than one screen of Scheme code, and it is covered in a single lecture of the freshman CS61A course.
btw, people who have seen the Scheme interpreter from SICP and are capable of appreciating its beauty and elegance (structure, clarity, compactness) would never try to do something like that.