My own experience basically agrees: I’ve read and enjoyed Skiena; it’s written in a clear style, and it’s the “cover to cover” text for a working developer or for interview practice. But I also have TAOCP and CLRS on my shelf, and I haven’t read either of them. I’ve certainly used both a lot, but I haven’t read them cover to cover because I treat them as reference texts.
Personally, I find it bothersome that these textbooks are written in idiosyncratic pseudocode. In my opinion, many of these authors lose their grasp of common implementation difficulties when they don’t give students working code that will compile and run. It’s easy to throw down the theory and some okay-ish pseudocode while in effect saying, “...and the rest is just a minor implementation detail, which I leave as an exercise to the reader...”
A while back, I played around with Earley parsing. The basic Earley algorithm is pretty simple. In fact, it's almost recursive descent without the recursion: instead of using the call stack, you manage the current state of the parse explicitly.
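To make that concrete, here's a minimal Earley recognizer sketch in Python (toy grammar and names are mine, not from any text mentioned in the thread): the predictor plays the role of a recursive-descent call, the completer plays the role of a return, and the chart stands in for the call stack.

```python
# Toy grammar: left-recursive arithmetic over a "num" token.
GRAMMAR = {
    "Sum":  [["Sum", "+", "Prod"], ["Prod"]],
    "Prod": [["Prod", "*", "num"], ["num"]],
}
START = "Sum"

def recognize(tokens):
    # Each chart item is (head, body, dot, origin) -- a rule in progress,
    # like a suspended stack frame in recursive descent.
    chart = [set() for _ in range(len(tokens) + 1)]
    for body in GRAMMAR[START]:
        chart[0].add((START, tuple(body), 0, 0))
    for i in range(len(tokens) + 1):
        worklist = list(chart[i])
        while worklist:
            head, body, dot, origin = worklist.pop()
            if dot < len(body):
                sym = body[dot]
                if sym in GRAMMAR:
                    # Predict: the "call" -- queue the callee's rules here.
                    for b in GRAMMAR[sym]:
                        item = (sym, tuple(b), 0, i)
                        if item not in chart[i]:
                            chart[i].add(item)
                            worklist.append(item)
                elif i < len(tokens) and tokens[i] == sym:
                    # Scan: consume one terminal, advance the dot.
                    chart[i + 1].add((head, body, dot + 1, origin))
            else:
                # Complete: the "return" -- advance every caller that was
                # waiting on `head` at this item's origin position.
                # (Caveat: this naive completer mishandles nullable rules,
                # which is the well-known bug in Earley's original paper.)
                for h2, b2, d2, o2 in list(chart[origin]):
                    if d2 < len(b2) and b2[d2] == head:
                        item = (h2, b2, d2 + 1, o2)
                        if item not in chart[i]:
                            chart[i].add(item)
                            worklist.append(item)
    return any(h == START and d == len(b) and o == 0
               for h, b, d, o in chart[len(tokens)])

print(recognize(["num", "+", "num", "*", "num"]))  # True
print(recognize(["num", "+", "*"]))                # False
```

Note that the left-recursive `Sum -> Sum + Prod` rule works fine here, which is exactly what plain recursive descent can't do.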
Unfortunately, the original algorithm was described in terms of dynamic programming, which was the hot new thing at the time. On top of that, the original has a bug (it mishandles nullable rules), it's hard to recover a parse tree from it, and it needs an extension (Leo's optimization) to be efficient on right-recursive grammars.
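Assuming the bug meant here is the well-known nullable-rule one, the standard fix (Aycock and Horspool's) precomputes which nonterminals can derive the empty string, so the predictor can eagerly advance the dot over them. A sketch of that nullable-set computation, with a toy grammar of my own:

```python
def nullable_set(grammar):
    # Fixed-point iteration: a nonterminal is nullable if some body of
    # its rules consists entirely of nullable symbols (including the
    # empty body, which is trivially all-nullable).
    nullable = set()
    changed = True
    while changed:
        changed = False
        for head, bodies in grammar.items():
            if head in nullable:
                continue
            if any(all(s in nullable for s in body) for body in bodies):
                nullable.add(head)
                changed = True
    return nullable

# A -> "a" | epsilon, S -> A A: both end up nullable.
G = {"S": [["A", "A"]], "A": [["a"], []]}
print(sorted(nullable_set(G)))  # ['A', 'S']
```

With that set in hand, the predictor adds the dot-advanced item whenever the predicted symbol is nullable, which is the essence of the fix.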
Translating the algorithm out of the dynamic-programming framing isn't too hard, and there's a reasonable description of how to fix the bug. But recovering a parse tree (technically a "Shared Packed Parse Forest": since Earley handles ambiguous context-free grammars, the multiple parse trees need to share structure to avoid exponential blow-up) killed me. That paper's pseudocode was the worst spaghetti I've seen in a long time.
I think I've finally managed to get the SPPF into the right structure now, but the code is far harder to penetrate than the complexity of the underlying idea warrants.
Now I only have to figure out how to actually iterate over the different parse trees...
I don’t feel that’s really the case with TAOCP. It’s machine language for a fictional machine, but emulators for that machine exist, so the code is runnable.
I'm curious if anyone else experienced this or could point out where I went wrong. I really wanted to do the class and I liked the lectures, but I started it after finishing the Stanford algorithms Coursera class, which had some of my favorite exercises ever. They had you write an algorithm in its entirety, gave you sample data sets to check against, and let you write it in any language. Compared with that, Sedgewick's assignments felt like coding with one hand tied behind my back.
If I may ask, what is your line of work? If it requires reading heavy algorithms stuff, it is probably an interesting job!