You are doing a good job balancing the two. Julia's Flux did the opposite, and it has severe performance problems compared to PyTorch despite being more usable and easier to install.
Installing PyTorch with Poetry is next to impossible. Flux got this right by bundling the GPU drivers. Their installation is also standardized and does not require pip's odd -f (--find-links) flag for CPU-only installations.
It was about being more general: a more general compiler, more general code, more composable code.
Since then, the team has been optimizing that and adding compiler optimizations to the language that benefit all code; ML-style code just stresses the compiler in a particular way. PyTorch, by contrast, handles ML's array-heavy workloads as a special case.
Julia will do the same, but it's laying the groundwork for domain-specific optimizations to be done in package and user space. A different sort of philosophy.
It was about being greedier: laying the groundwork for a more powerful tool in general, at some short-term cost.
They could have just written a framework that baked in fp16/32/64 with CUDA kernels and traced, operator-overloading computational graphs, and gotten more speedup over PyTorch with better usability (in fact, avalon.jl takes that approach).
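To make the operator-overloading/tracing style concrete, here's a toy sketch in Julia (the Tracked type is made up for illustration; it is not Flux's or avalon.jl's actual internals):

    # Operator overloading on a wrapper type records a computational graph
    # as ordinary code runs.
    struct Tracked
        value::Float64
        op::Symbol
        parents::Vector{Tracked}
    end
    Tracked(x::Real) = Tracked(Float64(x), :input, Tracked[])

    Base.:+(a::Tracked, b::Tracked) = Tracked(a.value + b.value, :+, [a, b])
    Base.:*(a::Tracked, b::Tracked) = Tracked(a.value * b.value, :*, [a, b])

    # Any function written for plain numbers now leaves behind a graph that
    # a backward pass (not shown) could walk.
    f(x, y) = x * y + x
    g = f(Tracked(2.0), Tracked(3.0))   # g.value == 8.0, g.op == :+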
But they didn't, and now there's a burgeoning ecosystem that does things no other framework can. The marginal benefit for current vanilla ML is smaller, because that field is stuck in a local optimum, but I think that is going to change: https://www.stochasticlifestyle.com/useful-algorithms-that-a...
In the meantime, places like MIT, Moderna, and NASA are reaping the benefits.
Some specific steps will push it past JAX/PyTorch for chunky, array-heavy GPU code (it can already meet or beat OpenBLAS/MKL for kernels written in scalar form; a sketch of such a kernel follows after this list):
7. Changes to array semantics, which will include generic immutability/ownership concepts.
And many more. The key is that all the initial groundwork, which traded specific speed for fundamental flexibility, will then feed back into making the ML use case faster than if it had been the sole focus from the start. People can do all kinds of crazy yet composable things in pure Julia, without modifying the base compiler.
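To show what "kernels written in scalar form" means, here's a plain-loop matmul in the style of the LoopVectorization.jl README (a real package; treat this as a sketch of the programming model, not a benchmark claim):

    using LoopVectorization

    # Plain scalar loops; @turbo vectorizes and unrolls them, which is how
    # scalar-form Julia kernels can approach tuned BLAS performance.
    function mymul!(C, A, B)
        @turbo for i in axes(A, 1), j in axes(B, 2)
            acc = zero(eltype(C))
            for k in axes(A, 2)
                acc += A[i, k] * B[k, j]
            end
            C[i, j] = acc
        end
        return C
    end

    A, B = rand(200, 300), rand(300, 100)
    C = zeros(200, 100)
    mymul!(C, A, B) ≈ A * B   # sanity check against the BLAS-backed result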
Bonus: being able to modify the type lattice to track custom program properties. This means you aren't stuck with the global tradeoffs of a static type system and can do things like opt-in, per-module tracking of array shapes at compile time: https://twitter.com/KenoFischer/status/1407810981338796035 Other packages, such as those for quantum computing, are planning their own analyses. It's generic, and the use cases and compositions aren't frozen at the outset (unlike, for example, Swift's tensors-fitting-perfectly proposal).
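The compiler-plugin machinery in that tweet is beyond what fits in a comment, but the shapes-in-types flavor can be illustrated with ordinary dispatch (ShapedVec is a made-up toy here, not the actual mechanism):

    # Shape lives in the type, so a mismatch surfaces as a MethodError via
    # dispatch instead of a runtime size check buried inside a kernel.
    struct ShapedVec{N,T}
        data::Vector{T}
        ShapedVec{N}(v::Vector{T}) where {N,T} =
            (length(v) == N || throw(DimensionMismatch("expected length $N"));
             new{N,T}(v))
    end

    # Only same-length vectors get a method.
    add(a::ShapedVec{N}, b::ShapedVec{N}) where {N} = ShapedVec{N}(a.data .+ b.data)

    x = ShapedVec{3}([1.0, 2.0, 3.0])
    y = ShapedVec{3}([4.0, 5.0, 6.0])
    add(x, y)                            # ok
    # add(x, ShapedVec{2}([1.0, 2.0]))   # MethodError: no matching add

The lattice approach makes this kind of property tracking opt-in and composable instead of hard-coding it into one array type.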
We ship everything needed in userland, including the parts of CUDA/CuBLAS and CuDNN that we depend on (which is why our binaries are so fat).
GPU drivers would be kernel-land and I don't think we actually can install GPU drivers as part of a `pip install`. Will look into what Flux is doing, but I doubt they ship GPU drivers.
Separately, thanks for flagging the Poetry issue; we might prioritize it, especially if the fix is easy.
Yes, Flux doesn't ship GPU drivers. It ships everything else (like the CUDA toolkit, etc.) as needed, using the artifact/Pkg system, for all mainstream OSes, and it doesn't interfere with system libraries.
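Roughly, the user-side experience looks like this (package names are real; exact first-use behavior depends on the CUDA.jl version, so treat it as a sketch):

    using Pkg
    Pkg.add("CUDA")      # no system CUDA toolkit required up front
    using CUDA           # CUDA.jl downloads a compatible CUDA toolkit (and the
                         # other libraries it needs) as Pkg artifacts, kept
                         # separate from system libraries
    CUDA.versioninfo()   # reports which artifact-provided toolkit was picked up

The NVIDIA driver itself is still the user's (or the OS's) responsibility.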
One question:
One of the advantages of a clean design is that performance is easier to optimize, since the 80/20 rule of performance becomes much more obvious. How true was this in your experience? Were there any major performance-related design changes, or was performance optimization a matter of tuning a few selected functions?