This is interesting, but if you're working on truly CPU-bound code, is Python likely to be the best choice?
Is multi-interpreter Python 3.8 really going to be more maintainable than just writing some C++ or Java? Is this going to be yet another way we need to understand concurrency in Python?
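For context, CPU-bound parallelism is already reachable in CPython today via process pools, which is one of the concurrency models Python programmers already have to understand alongside threads and asyncio. A minimal sketch (the task function and inputs are illustrative placeholders, not anything from this thread):

```python
from concurrent.futures import ProcessPoolExecutor


def cpu_bound_task(n: int) -> int:
    """Deliberately CPU-heavy placeholder: sum of squares up to n."""
    return sum(i * i for i in range(n))


if __name__ == "__main__":
    inputs = [10_000_000] * 4
    # Each task runs in its own process, so all cores can be used even
    # though the GIL constrains threads within a single interpreter.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(cpu_bound_task, inputs))
    print(results)
```

Any per-interpreter-GIL API would sit alongside this model rather than replace it, which is part of what the "yet another way" concern is about.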
I don't think it's safe to assume that everyone is working on greenfield projects, where the performance requirements are known at the outset.
Work that raises the upper bound of Python's performance is valuable because so much Python code already exists, and making that code faster can pay off when its requirements shift or when new knowledge about those requirements emerges.
I think similar reasons justify faster runtimes such as PyPy.
I agree. People who complain about the GIL generally seem like they'd be better served by other languages. It comes across as a complaint that it's unfair to have to accept the GIL in exchange for Python's other features. You can't always get what you want, I guess.