The OP says his PyGreedyAP gets 96.8% accuracy in 12s (vs NLTK's 94% at 236s)
You can work your way down the vast number of nltk modules, and you'll find that almost none of them are useful for real work; those that are useful ship a host of alternative algorithms, all much worse than the current state of the art.
nltk makes most sense as a teaching tool, but even then it's mostly out of date. The chapter on "Parsing" in the nltk book doesn't even really deal with statistical parsing. The dependency parsing work referenced in this post is almost all 1-3 years old, so obviously it isn't covered either.
As an integration layer, nltk is so much more trouble than it's worth. You can use it to compute some of your scoring metrics, or read in a corpus, but...why bother?
I'm slowly putting together an alternative, where you get exactly one tokeniser, exactly one tagger, etc. All these algorithms have the same i/o, so we shouldn't ask a user to choose one. We should just provide the best one.
My previous post, on POS tagging, shows that nltk's POS tagger is incredibly slow and not very accurate: https://honnibal.wordpress.com/2013/09/11/a-good-part-of-spe... . nltk scores 94% in 3m56s; my 200-line implementation scores 96.8% in 12s.
I used to use nltk for tokenisation and sentence-boundary detection, but this library seems better for that: https://code.google.com/p/splitta/
Anyone have thoughts on how they or other libs compare to openCV?
I expect a successful patch request would take a lot of work. That's not a criticism; it's just how it is, given the aims, history and status of their project.
He talks about the averaged perceptron at the end (lectures on generalized log-linear models - GLM: http://www.cs.columbia.edu/~cs4705/)
The perceptron tagger code (hw4 solution) can be found here: https://github.com/emmadoraruth/Perceptron_Tagger.git
You can also check out the great online NLP course taught by the author and Prof. Chris Manning from Stanford: https://www.youtube.com/watch?v=nfoudtpBV68&list=PL6397E4B26...
You could try Hal Daume's blog, and Bob Carpenter's blog.
It's super fast (thanks to C) and very accurate (thanks to its deep learning approach). The license doesn't allow commercial use, though.
SENNA can be used with NLTK: http://pydoc.net/Python/nltk/2.0.2/nltk.tag.senna/
ClearNLP has a lot of nifty bells and whistles that would make a big difference. In particular, it selects the model for you, based on your text, and its similarity to various subsets of the training data. So, you get a model more matched to your data, which will improve real-world accuracies a lot.
Also, the work he describes requires that the text already be tokenized and POS-tagged, which will add some time, but not too much (and these are also functions NLTK can perform).
Python3 came out in 2008. Right now the year is 2014. Assuming this is pretty new code, what reason could there possibly be for not using Python3 for this?
Python3 used to come without a lot of the "batteries" that make Python a useful language for science-y stuff (numpy, matplotlib, Cython), and the 'improvements' that Python3 brings are not big enough that people would switch over.
Contrast this to C++11, which brings real improvements to pain points that existed before (i.e., areas of the STL that ought to have been standardized but were not).
Contrast this to Java 5 (generics) and Java 8 (lambdas), which solve actual pain points that many people who program in Java feel.
The biggest pain point in Python2-the-language isn't any missing language feature -- most people have been happy on that front since at least 2.6. Instead, it's speed, and people are indeed transitioning parts of their programs from Python2 to Cython. Python3 doesn't do anything for speed.
I've not really worked with Python 3 yet, but I regretted not basing a recent project, http://cloze.it , on it: I was getting unicode problems that Python 3 addresses pretty well.
Also, OPs model only runs unlabeled dependency parsing. Most applications require labeled dependency parsing, which is much harder. State of the art results for English are currently ~93% established by Joakim Nivre and Yue Zhang in http://www.sutd.edu.sg/cmsresource/faculty/yuezhang/acl11j.p... and based on the zpar parser framework (see http://www.cl.cam.ac.uk/~sc609/pubs/cl11_early.pdf ).
zpar ( http://sourceforge.net/projects/zpar/ ) is the fastest dependency parser I am aware of, and even it achieves lower parsing rates.
In all papers, note how many more feature templates are specified. More recent work contains yet another order of magnitude more feature templates. I'm betting python (w/ or w/o Cython) won't last very long as competition.
All that being said, the most significant problem in this part of NLP is that the best corpus files required for training modern accurate models are very expensive to license for both research and commercial purposes (tens if not hundreds of thousands of $s).
Note that the Stanford label set has 40 labels, so there are about 80 classes to evaluate. The Penn2Malt scheme has 20 labels, so you need to be careful which dependency scheme is being referenced when run-time figures are reported.
The way the run-time cost works is this: if you extract f features per transition, for c classes, with a beam of size k and a sentence of n words, you make O(cfkn) feature lookups, which is the main cost.
For the parser.py implementation, most of the speed is coming from greedy search (k=1), and low number of classes (c=3, instead of c=80). Number of feature templates, f, is similar between this parser and zpar. We could add some more templates for label features, and do labelled parsing, and gain about 1% accuracy here, at the cost of being about 40x slower. The only reason I didn't was that it complicates the implementation and presentation slightly. The implementation was all about the blog post.
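To put rough numbers on that (the per-transition feature count here is just an illustrative guess, not a measured figure):

    # Back-of-the-envelope totals for the O(cfkn) feature-lookup cost.
    # f is a guess at features per transition, not a measured number.
    f = 100      # feature templates evaluated per transition (guess)
    n = 25       # words in a typical sentence

    greedy_unlabeled = 3 * f * 1 * n        # c=3, k=1, as in parser.py
    beam_labeled = 80 * f * 64 * n          # c=80, k=64, zpar-style

    print(greedy_unlabeled)                 # 7500 lookups per sentence
    print(beam_labeled)                     # 12800000 lookups per sentence
    print(beam_labeled // greedy_unlabeled) # ~1706x more work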
The Cython parser does everything with C data structures, which are manually memory managed. I don't think I'm paying any language overhead compared to a C++ implementation. So you're absolutely right that as more feature templates stack on, and you use more dependency labels, speed goes down. But, the Cython parser has no problem relative to zpar in this respect.
A significant problem with unlabeled dep. parsing is that you can't differentiate important things like subject vs object dependents. In the sentence "They ate the pizza with anchovies.", how would a program distinguish between 'they' as the subject and 'pizza' as the object? In other words, who ate what?
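To illustrate, a labeled parse of that sentence might be written out roughly like this (hand-made example; the relation names approximate Stanford-style labels, not the output of any particular parser):

    # A hand-written, approximate labeled dependency parse of
    # "They ate the pizza with anchovies."
    # Each entry: (dependent, relation, head). With labels, "who ate what"
    # is explicit: 'They' is the nsubj of 'ate', 'pizza' is the dobj.
    parse = [
        ("They",      "nsubj", "ate"),
        ("ate",       "root",  "ROOT"),
        ("the",       "det",   "pizza"),
        ("pizza",     "dobj",  "ate"),
        ("with",      "prep",  "pizza"),
        ("anchovies", "pobj",  "with"),
    ]

    for dep, rel, head in parse:
        print("%s(%s, %s)" % (rel, head, dep))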
Goldberg et al, "Efficient Implementation of Beam-Search Incremental Parsers". ACL 2013. http://www.aclweb.org/anthology/P13-2111
I haven't been able to work out how to do the feature caching in a way that won't ruin my implementation when I need to add more features.
I also get substantial benefit at high k from hashing the "kernel tokens" and memoising the score for the state.
I did try the tree-structured stack that they recommend, but I didn't find any run-time benefits from it, and the implementation kept confusing me. I might have made a mistake, but I suspect it's because my state arrays are copied with low-level malloc/free/memcpy, where they pay Python overhead on their copies.
I didn't see noticeable improvements from TSS either. I did some performance tuning - much more time goes to feature extraction and scoring. Can you elaborate on what you mean by 'hashing the "kernel tokens" and memoising the score for the state'? Are the kernel tokens something like the head of stack/queue?
For feature caching, I went with a generic model for a feature template as a combination of feature elements (for features like S0t+Q0t+Q1t) that have a closed set, so the feature template is limited to a set that is a cartesian product of the elements' sets. When you initialise parsing for a new sentence, you can select a subset of the possibilities to generate a "submodel" for only that sentence. That way you need much less memory. If you can pack it properly you can get a lot of it into the lower level caches which should allow for significant speed up.
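If I'm reading that right, a stripped-down Python sketch of the idea could look something like this (names and the toy tag set are made up, just to show the shape of the submodel selection):

    # Toy sketch of a per-sentence "submodel" for a closed-set template like
    # S0t+Q0t+Q1t: instead of carrying weights for every possible tag triple,
    # keep only the triples that can occur given this sentence's tags.
    from itertools import product

    ALL_TAGS = ["NN", "VB", "DT", "IN", "JJ", "RB"]         # toy closed set
    weights = {triple: 0.0 for triple in product(ALL_TAGS, repeat=3)}

    def submodel_for(sentence_tags):
        tags = set(sentence_tags)
        return {t: w for t, w in weights.items() if set(t) <= tags}

    sub = submodel_for(["DT", "NN", "VB", "DT", "NN"])
    print(len(weights), "->", len(sub))                     # 216 -> 27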
The memoisation I refer to is called here:
What happens is, I extract the set of token indices for S0, N0, S0h, S0h2, etc, into a struct SlotTokens. SlotTokens is sufficient to extract the features, so I can use its hash to memoise an array of class scores. Cache utilisation is about 30-40% even at k=8.
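In plain Python the shape of it is roughly this (the real version hashes a C struct of token indices; the scorer here is a stand-in just so the sketch runs):

    # Sketch: memoise the classifier's scores on the "kernel token" indices.
    # Two beam states that happen to share S0, N0, S0h, S0h2, ... produce
    # identical features, so their class scores can be reused.
    score_cache = {}

    def class_scores(kernel_tokens, score_fn):
        # kernel_tokens: a tuple of word indices, e.g. (S0, N0, S0h, S0h2)
        key = hash(kernel_tokens)
        if key not in score_cache:
            score_cache[key] = score_fn(kernel_tokens)
        return score_cache[key]

    def dummy_scores(kernel_tokens):                  # stand-in scorer
        return [float(sum(kernel_tokens) % 7), 0.0, 1.0]

    print(class_scores((3, 4, 1, 0), dummy_scores))
    print(class_scores((3, 4, 1, 0), dummy_scores))   # second call hits the cache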
While I'm here...
The big enum names all of the atomic feature values that I extract, and places their values into an array, context. So context[S0w] contains the word of the token on top of the stack.
I then list the actual features as tuples, referring to those values. So I can write a group of features with something like new_features = ((S0w, S0p), (S0w,), (S0p,)). That would add three feature templates: one with the word plus the POS tag, one with just the word, one with just the POS tag.
A bit of machinery in features.pyx then takes those Python feature definitions, and compiles them into a form that can be used more efficiently.
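A rough pure-Python sketch of that data flow (simplified; the real machinery compiles the templates into C, but the idea is the same):

    # Atomic values go into a flat 'context' array, and templates are tuples
    # of indices into it. Joining the picked values gives the feature string
    # (or hash) to look up in the weights table.
    S0w, S0p, N0w, N0p = 0, 1, 2, 3              # enum of atomic context slots

    new_features = ((S0w, S0p), (S0w,), (S0p,))  # the three templates from above

    def extract(context, templates):
        # context is indexed by the slot enum, e.g. context[S0w] is the word
        # on top of the stack; each template picks out slots and joins them.
        return ["+".join(str(context[i]) for i in t) for t in templates]

    context = ["pizza", "NN", "with", "IN"]      # made-up S0w, S0p, N0w, N0p values
    print(extract(context, new_features))
    # -> ['pizza+NN', 'pizza', 'NN']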
But the Cython parser is not doing labeled parsing... and a significant number of feature templates are relevant to labeled parsing.
To reach Nivre and Zhang's 2011 results, I think you'll find that labeled parsing degrades performance significantly and that you'll need more complex code:
1. you have more than an order of magnitude more transitions (80 vs 4)
2. you need to add beam search, with early update for training (beam size = 64 to be consistent with zpar; a rough sketch follows below)
3. an order of magnitude more feature templates, with more complex features
So now you'll be evaluating 20x more transitions, on ~64x more candidates, with ~10x more feature templates. You can't multithread properly in Python; you can maybe get some more juice by using some form of SIMD (with or without a GPU). Your model is much larger in memory, so any luck you had with L2/L3 caching is probably gone. Python has limits, and this is the point where you start hitting them.
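For what it's worth, the early-update rule itself is simple, even if the real systems around it aren't. A toy, runnable sketch with a made-up 4-action transition system and a beam of 4 (instead of ~80 actions and a beam of 64) looks something like this:

    # Toy sketch of beam-search training with early update: if the gold action
    # sequence falls out of the beam, update against the best partial
    # hypothesis and stop. The transition system and features are placeholders.
    from collections import defaultdict

    BEAM = 4
    ACTIONS = [0, 1, 2, 3]            # stand-in for ~80 labelled transitions
    weights = defaultdict(float)

    def feats(prefix, action):
        prev = prefix[-1] if prefix else -1
        return [("prev+act", prev, action), ("step+act", len(prefix), action)]

    def score(prefix, action):
        return sum(weights[f] for f in feats(prefix, action))

    def train_sentence(gold):
        beam = [(0.0, ())]
        for t in range(len(gold)):
            # Expand every state with every action, keep the top BEAM.
            beam = sorted(
                ((s + score(p, a), p + (a,)) for s, p in beam for a in ACTIONS),
                reverse=True)[:BEAM]
            if tuple(gold[:t + 1]) not in {p for _, p in beam}:
                # Early update: gold fell off the beam, so update the weights
                # against the best (wrong) partial hypothesis and stop here.
                pred = beam[0][1]
                for i in range(t + 1):
                    for f in feats(tuple(gold[:i]), gold[i]):
                        weights[f] += 1.0
                    for f in feats(pred[:i], pred[i]):
                        weights[f] -= 1.0
                return

    train_sentence([0, 2, 1, 3])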
Edit: I see you've updated your comment; I guess we're on the same page. It's nice to see a concise implementation. I wrote one in golang for my master's thesis. I achieved perfect parity with zpar, but it took two orders of magnitude more LOC.
    Parser      Accuracy   Speed (w/s)   Language   LOC
    Stanford    89.6%      19            Java       > 50,000
    parser.py   89.8%      2,020         Python     ~500
    Redshift    93.6%      2,580         Cython     ~4,000
Are these the labeled parsing results you are referring to? How many sents/sec? Using the same PTB data sets as Zhang and Nivre '11?
The data sets use the _Stanford_ labels, where the main results in Zhang and Nivre refer to the MALT labels. Z&N do provide a single Stanford accuracy in their results, of 93.5% UAS.
Sentences per second should be just over 100. I use k=8 with some extra features referring to words further down the stack, where Z&N use k=64. Right at the bottom of the post, you can find the commit SHA and the commands for the experiment.
I'd start with Yue Zhang's papers for work on this: http://www.sutd.edu.sg/yuezhang.aspx
Check us out https://www.repustate.com
We also do this for Arabic, which is equally challenging.
What does that mean, "tell extract"? Not familiar with this terminology.
For researchers outside of NLP, it's often actually worse to have your parser be 2% better than the previous work, for reasons your readers don't care about and you can't easily explain. If your readers have heard of the Stanford parser, and previous work has used it, it's likely a good choice for your experiment.
Basically, if people are always using the new hotness outside of NLP, then those non-NLP researchers have to keep learning the new hotness! Ain't nobody got time for that.
I do think we're at a good "save point", though, where we should get people updated to the new technologies. Hence the blog post :)
As for use-cases, mostly people will use labelled dependency parses, because why not? And they're mostly used inside other NLP research; for instance, I've been working on detecting disfluencies in conversational speech, and there's increasing work on using this stuff in translation, information extraction, etc.
Does it work on Windows too / does it rely on Unix-only constructs?
Why not include some experiments with Lua and lpeg?
It would probably be faster than Java or Python.
And arguably Lua is easier to learn.
Maybe the work required (one-time cost) would be rewarded with significant gains.
Why wouldn't they?
The interpreter is about 8K.
The libpython shared library is about 1.5MB.
If what you are referring to as the standard library is in that 1.5MB, then disregard my comment on LOC.
If it's in that remaining 12MB or so of stuff, then I'm wondering if LOC counts should include what is in there that is required for these programs to run.
Look at it this way. If I download 12MB of code and then I write 500 lines, does that mean I am a master of writing small, compact code?
Sure, if you ignore the 12MB I had to download first.
I'm not singling out Python. Perl, Ruby, etc. are equally large.
The point is you are downloading 1000's of LOC to enable you to write "short" programs.
Nothing wrong with that. But those 12MB that were needed beforehand... should we just ignore all that when we count LOC?
Maybe one has to do embedded work to have an appreciation for memory and storage limitations and thus the sheer size of these scripting libraries.
The "overhead" I'm concerned with is based in hardware, not my own creativity.