Yes, these are good points and probably the most important ones as far as the maths is concerned, though I would say regularisation methods are really standard things one learns in any ML / stats course.
Ledoit-Wolf shrinkage is indeed more exotic and very useful.
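For anyone who hasn't used it, here's a minimal sketch with scikit-learn's estimator (X below is just a placeholder data matrix, not anything specific from the discussion):

    import numpy as np
    from sklearn.covariance import LedoitWolf

    # placeholder data: 50 samples, 20 features, so the sample covariance
    # is noisy / ill-conditioned and benefits from shrinkage
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 20))

    lw = LedoitWolf().fit(X)
    cov_shrunk = lw.covariance_   # shrunk covariance estimate
    print(lw.shrinkage_)          # shrinkage intensity chosen by the Ledoit-Wolf formula

The nice part is that the shrinkage intensity is estimated from the data rather than hand-tuned.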
No, sorry but that’s a very dumb way to think about it.
If someone can get something done in half the time it takes someone else, and is slacking off the rest of the time, is he stealing from the company?
Obviously not. And would his total output be significantly higher if he tried to work continuously? Not necessarily.
Different people have different productivity patterns.
The point is that imposing a rhythm or longer hours on someone does not necessarily improve their output, no matter how hard you push.
I think type hints have mostly changed Python for the better, but I still get frustrated by the number of half-baked features and inconsistencies in the language.
You end up fighting quirks (like isinstance not working with parameterised generics) all the time, and it can get pretty tedious.
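A concrete example of that isinstance quirk (hypothetical snippet, any recent Python 3.x):

    from typing import get_origin

    xs: list[int] = [1, 2, 3]

    # isinstance(xs, list[int])  # TypeError: isinstance() argument 2 cannot
    #                            # be a parameterized generic
    print(isinstance(xs, list))   # True, but the type argument is lost
    print(get_origin(list[int]))  # <class 'list'>, the usual workaround

So the runtime and the type system only half agree on what a generic is, which is exactly the kind of inconsistency I mean.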
I’ve been disappointed with JAX, which I was trying to use for reverse-mode automatic differentiation.
The issue is that XLA JIT compilation is very slow: just switching from numpy to jax.numpy easily added half a minute of overhead to the first call of the base function, which made it a non-starter for my use case. It’s clearly optimised for large flow computations where the JIT overhead is dwarfed by the rest of the work.
In the end I reverted to autograd, which did the job fine.
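Roughly what the autograd version looks like, as a minimal sketch (the function f below is a made-up stand-in for my actual code):

    import autograd.numpy as anp   # drop-in numpy wrapper that records operations
    from autograd import grad

    def f(x):
        # made-up scalar objective
        return anp.log(anp.sum(anp.exp(x ** 2)))

    df = grad(f)                   # reverse-mode gradient, no compilation step
    print(df(anp.array([0.1, 0.2, 0.3])))

The first call runs at plain-numpy speed since nothing gets compiled, so there is no cold-start penalty.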
I had never heard of Taichi until now; I’m curious how it compares.