Note that the complexity is still O(n^3), which is higher than that of other algorithms for most "real-world" grammars.
I read Might's initial paper on derivative parsing a while ago, and I agree that it really seems very elegant at first when compared e.g. to bottom-up parsing. The problem with the approach, though, is that parse tree generation is trickier than with other methods and (IMHO) renders the technique considerably less elegant. Of course, if only recognition is needed then the technique is straightforward, but in most practical use cases the user will want to generate a parse tree as well.
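To show why the recognition side is so simple: here is a minimal sketch of recognition via Brzozowski derivatives, shown on regular expressions (Might's work extends the same idea to context-free grammars, which needs laziness and memoization). The class and function names are my own, not from the paper.

```python
from dataclasses import dataclass

class Re:
    pass

@dataclass(frozen=True)
class Empty(Re):      # matches nothing
    pass

@dataclass(frozen=True)
class Eps(Re):        # matches only the empty string
    pass

@dataclass(frozen=True)
class Char(Re):       # matches a single character
    c: str

@dataclass(frozen=True)
class Alt(Re):        # alternation: l | r
    l: Re
    r: Re

@dataclass(frozen=True)
class Seq(Re):        # concatenation: l r
    l: Re
    r: Re

def nullable(r: Re) -> bool:
    """Does r accept the empty string?"""
    if isinstance(r, Eps):
        return True
    if isinstance(r, Alt):
        return nullable(r.l) or nullable(r.r)
    if isinstance(r, Seq):
        return nullable(r.l) and nullable(r.r)
    return False

def deriv(r: Re, c: str) -> Re:
    """The derivative of r with respect to character c."""
    if isinstance(r, Char):
        return Eps() if r.c == c else Empty()
    if isinstance(r, Alt):
        return Alt(deriv(r.l, c), deriv(r.r, c))
    if isinstance(r, Seq):
        d = Seq(deriv(r.l, c), r.r)
        # if l can match empty, c may also start a match of r.r
        return Alt(d, deriv(r.r, c)) if nullable(r.l) else d
    return Empty()

def matches(r: Re, s: str) -> bool:
    # Recognition is just folding deriv over the input:
    for c in s:
        r = deriv(r, c)
    return nullable(r)

ab = Seq(Char('a'), Char('b'))
print(matches(ab, "ab"))  # True
print(matches(ab, "a"))   # False
```

A recognizer is this short; attaching parse tree construction to the same machinery is where (in my experience) the elegance starts to erode.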
Also, the parser implementations discussed in the paper (and in Might's original work) are only able to parse a few hundred lines of code per second (for a small language), which is very slow compared to e.g. generated recursive-descent or bottom-up parsers, which can achieve several hundred thousand lines of code per second for real-world languages.
So although it's an interesting concept, I personally think it will not be of very high relevance to practical parser generation, at least for use cases where efficiency and scalability matter.
I have been working on a derivative-based parser generator, though, to see if I can get anywhere near the performance of my currently used PEG parser (which uses conditional memoization, i.e. packrat parsing, to speed up the parsing).
Also, like other people already have pointed out, the time complexity calculations in the academic literature are not always relevant for practical parsing:
1) A parser with exponential worst-case time complexity may still beat a polynomial- or linear-time parser in practice, depending on the grammar and the actual input being parsed.
2) For real-world grammars, constants (i.e. the c in O(c*n)) will be the determining factor between two alternative parsing techniques with identical time complexity. As an example, memoization-based (packrat) parsing in combination with PEG achieves the same time complexity as shift-reduce style parsers for many grammars, but in practice the memory allocations and bookkeeping required for the memoization make the shift-reduce approach much faster.
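To make the bookkeeping cost concrete, here is a toy illustration of packrat memoization (my own sketch, not taken from any particular parser): each (rule, position) pair is computed at most once, which gives linear time in the input, but only at the cost of a memo table of O(rules * positions) entries that must be allocated and probed on every call.

```python
from functools import lru_cache

def make_parser(text: str):
    # Toy PEG grammar: Sum <- Num ('+' Sum)? ; Num <- [0-9]+
    # Each rule returns the new input position on success, None on failure.

    @lru_cache(maxsize=None)   # the packrat memo table, keyed by position
    def num(i: int):
        j = i
        while j < len(text) and text[j].isdigit():
            j += 1
        return j if j > i else None

    @lru_cache(maxsize=None)   # memoized, so re-parsing at i is a table lookup
    def sum_(i: int):
        j = num(i)
        if j is None:
            return None
        if j < len(text) and text[j] == '+':
            k = sum_(j + 1)
            if k is not None:
                return k
        return j               # fall back to just the Num

    return sum_

parse = make_parser("1+2+3")
print(parse(0))  # 5, i.e. the whole input was consumed
```

Every one of those cache probes and stored results is exactly the constant-factor overhead I mean: a table-driven shift-reduce parser does comparable work per token with a couple of array lookups and no per-position allocation.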
I have implemented several different parsers myself, and while it is pretty straightforward to write a parser for a real-world language (e.g. Python) today, achieving very good parsing speed is not. As an example, on my machine the Python parser can process about 100-300k lines of code per second (including AST generation), while a comparable packrat PEG parser is slower by a factor of 5-10. (For many use cases, parsing at 10,000 loc/second is still good enough, but it should not be considered fast.)