The thing is, recursive descent, while already fairly hard, still seems several steps easier to me than a construct like parsing with derivatives (or parser combinators).
And as far as I can follow this paper, the process seems similar to recursive descent, except that instead of generating code, you generate a bunch of the insides of the code and apply it as an array or something.
And that's the thing. The view of the process is nice and regular for someone who likes to look at languages abstractly, but you have to be leveraging a lot of the laziness of functional languages to make this work.
So altogether, I tend to question whether these approaches are really worth the effort.
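For what it's worth, the core of the derivative idea fits in a page. This is a toy sketch in Python of Brzozowski derivatives for plain regular expressions (my own names, not the paper's code, and without the memoization/fixpoint machinery the paper needs for context-free grammars):

```python
# Toy Brzozowski-derivative matcher for regular expressions.
# deriv(r, c) returns the language of strings s such that c+s is in r.

class Empty:                       # the empty language (matches nothing)
    pass
class Eps:                         # matches only the empty string
    pass
class Char:
    def __init__(self, c): self.c = c
class Alt:                         # union: a | b
    def __init__(self, a, b): self.a, self.b = a, b
class Seq:                         # concatenation: a then b
    def __init__(self, a, b): self.a, self.b = a, b
class Star:                        # Kleene star: a*
    def __init__(self, a): self.a = a

def nullable(r):
    """Does r accept the empty string?"""
    if isinstance(r, (Eps, Star)): return True
    if isinstance(r, Alt): return nullable(r.a) or nullable(r.b)
    if isinstance(r, Seq): return nullable(r.a) and nullable(r.b)
    return False                   # Empty, Char

def deriv(r, c):
    if isinstance(r, (Empty, Eps)): return Empty()
    if isinstance(r, Char): return Eps() if r.c == c else Empty()
    if isinstance(r, Alt): return Alt(deriv(r.a, c), deriv(r.b, c))
    if isinstance(r, Seq):
        d = Seq(deriv(r.a, c), r.b)
        # if a can match empty, c may also start a match of b
        return Alt(d, deriv(r.b, c)) if nullable(r.a) else d
    if isinstance(r, Star): return Seq(deriv(r.a, c), r)

def matches(r, s):
    for c in s:
        r = deriv(r, c)
    return nullable(r)
```

For example, `matches(Star(Char('a')), "aaa")` is `True`. Extending this to context-free grammars is where the laziness and memoization come in, since the grammar refers to itself.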
I've never managed to find a parser combinator library that I love outside of the Haskell family of languages. I'm not sure why that is.
I have some crude Python code that (kind of) illustrates the brevity and elegance of the idea: https://gist.github.com/calroc/ad4321dc4c43d5f3e09d39283461b...
See https://github.com/pygy/compose-regexp.js which uses a parser combinator API with regexps as backend.
compose-regexp is a parser combinator lib backed by regexps.
Traditional parser combinators are just a recursive descent parser DSL, with a 1:1 translation between both representations.
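To make the 1:1 translation concrete, here is a toy combinator sketch in Python (my own names, not any particular library): each parser is a function from `(string, position)` to `(value, new_position)` or `None`, and each combinator corresponds directly to a recursive-descent construct (sequencing, alternation):

```python
# Toy parser combinators. A parser is a function
# (s, i) -> (value, new_i) on success, or None on failure.

def char(c):
    """Parser for a single literal character."""
    def p(s, i):
        if i < len(s) and s[i] == c:
            return (c, i + 1)
        return None
    return p

def seq(p1, p2):
    """Run p1, then p2 on the rest -- like two statements in a
    recursive-descent function body."""
    def p(s, i):
        r1 = p1(s, i)
        if r1 is None: return None
        v1, i = r1
        r2 = p2(s, i)
        if r2 is None: return None
        v2, i = r2
        return ((v1, v2), i)
    return p

def alt(p1, p2):
    """Try p1, fall back to p2 -- like an if/else in recursive descent."""
    def p(s, i):
        return p1(s, i) or p2(s, i)
    return p
```

So `seq(char('a'), char('b'))("ab", 0)` gives `(('a', 'b'), 2)`: little parsers composed into a bigger one, with each combinator mapping onto the code you would have written by hand.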
> Traditional parser combinators are just a recursive descent parser DSL, with a 1:1 translation between both representations.
Yes, exactly - a recursive descent _parser_ recognises a grammar and produces the parse result for the grammar. Parser combinators take little parsers and make bigger parsers.
compose-regexp doesn't make parsers; it makes regular expression grammars that you then feed to some _other_ parser that actually does the parsing based on that grammar.
The library combines regular grammars that need an interpreter (the `match` method, which is bundled with the regexp object), whereas parser combinators work on code written in the same language (that is ultimately interpreted by the CPU).
Edit: the parser in this case is a `(Regexp.prototype.match, regex.source)` tuple, whereas in the case of a parser combinator lib in a language that compiles to machine code, it would be `(call instruction, instructions list)`.
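The distinction is easy to see in code. Here is a sketch of the "combine the grammars, not the parsers" style in Python (my own function names, mirroring the idea behind compose-regexp but not that library's actual API): the combinators glue together regexp *source strings*, and only at the end does the regex engine interpret the result.

```python
import re

# Combinators over regexp sources: each function returns a pattern
# string, not a parser. The regex engine does the actual parsing.

def rseq(*parts):
    """Concatenation: match the parts in order."""
    return "".join(f"(?:{p})" for p in parts)

def either(*parts):
    """Alternation: match any one of the parts."""
    return "|".join(f"(?:{p})" for p in parts)

def many(p):
    """Zero or more repetitions."""
    return f"(?:{p})*"

# Build a grammar for a token, then hand it to the interpreter (re).
number = rseq(r"\d", many(r"\d"))            # digit, then more digits
ident  = rseq(r"[a-zA-Z_]", many(r"\w"))     # letter/underscore, then word chars
token  = re.compile(either(number, ident))
```

Here `number` and `ident` are just strings; nothing parses until `token.fullmatch(...)` runs, which is exactly the `(interpreter, grammar)` split described above.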
Since a context-free language is a recursive regular language, it is tempting to use the same code for computing the derivative. From the perspective of parsing, this has two chief drawbacks: 1. It doesn’t work. 2. It wouldn’t produce a parse forest even if it did.
> Since a context-free language is a recursive regular language, it is tempting to use the same code for computing the derivative. From the perspective of parsing, this has two chief drawbacks:
1. It doesn’t work.
2. It wouldn’t produce a parse forest even if it did.
The fact that you can splice the Kleene star part of the dRE algorithm into the unification algorithm suggests that you should be able to do the same with the other parts of it and be able to specify union types and intersection types. I'm kinda out of my league, I think, and a lot of the papers that seem relevant are behind paywalls.