Yes, that is how you make fast libraries in Python. But NLTK isn't written using C extension modules; all of its NLP is done in pure Python. You could rewrite the parts that need to be fast as C extensions, but then what's the point of using NLTK in the first place?
NLTK was never intended for production-grade natural language processing. Its primary objective has always been to teach natural language processing with clear, well-commented code and documentation. If that isn't your situation, please use something else.
What's the point? That half of your code base has already been written for you. Rewriting the performance-critical parts is a lot of work, and not having to rewrite a corpus reader, tree transformations, or an evaluation procedure is a real advantage, aside from NLTK being an excellent prototyping platform. With Cython you can seamlessly combine Python code such as NLTK's with your own optimized code. This was admittedly never the intention of NLTK, but I have found the general pattern of combining arbitrary Python code with optimized Cython code to work very well, and the end result is a much leaner code base than the equivalent in Java or C++.
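To make the pattern concrete, here is a minimal sketch of what such a Cython hot spot might look like. The module name, function, and parameters are hypothetical, and it would need to be compiled (e.g. via `cythonize`) before use; the point is only that the typed inner loop compiles to C while accepting plain Python lists, such as the token lists NLTK's pure-Python tokenizers produce.

```cython
# fast_count.pyx -- hypothetical module; build with: cythonize("fast_count.pyx")
# cython: language_level=3

def count_long_tokens(list tokens, int min_len):
    """Count tokens of at least min_len characters.

    The typed loop below compiles to a C loop; the caller can pass
    in token lists from NLTK's pure-Python tokenizers unchanged.
    """
    cdef Py_ssize_t i, n = len(tokens)
    cdef int count = 0
    for i in range(n):
        if len(<str> tokens[i]) >= min_len:
            count += 1
    return count
```

From the Python side you would keep using NLTK as-is, e.g. `count_long_tokens(nltk.word_tokenize(text), 5)`: only the profiled hot spot moves into Cython, while the corpus readers, tree transformations, and evaluation code stay in ordinary Python.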