Apache TVM is super cool in theory. It's fast thanks to autotuning, and it supports tons of backends: Vulkan, Metal, WASM + WebGPU, FPGAs, weird mobile accelerators and such. It also supports quantization, dynamic shapes and other cool features.
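To be concrete about the multi-backend part: retargeting is roughly a one-line change with the classic Relay API. This is just a sketch with a placeholder ONNX model and input shape, not anything official:

    # Sketch: compile an ONNX model for a Vulkan GPU with TVM's classic Relay API.
    # "resnet.onnx" and the input shape are placeholders.
    import onnx
    import tvm
    from tvm import relay

    onnx_model = onnx.load("resnet.onnx")
    mod, params = relay.frontend.from_onnx(onnx_model, shape={"input": (1, 3, 224, 224)})

    target = tvm.target.Target("vulkan", host="llvm")  # swap for "metal", "llvm", "webgpu", ...
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target=target, params=params)

    lib.export_library("resnet_vulkan.so")  # deployable shared library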
But... it isn't really used much outside of MLC? And MLC's implementations are basically demos.
I don't get why. AI inference communities are dying for fast, multiplatform backends without the fuss of PyTorch.
Check out the latest docs (https://mlc.ai/mlc-llm/docs/). MLC started with demos, but it has lately evolved, with API integrations and documentation, into an inference solution that everyone can reuse for universal deployment.
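The Python API is OpenAI-style now. Roughly, based on the quick start (the model id below is just an example; any MLC-compiled weights work the same way):

    # Sketch of the OpenAI-style Python API from the MLC LLM quick start.
    from mlc_llm import MLCEngine

    model = "HF://mlc-ai/Llama-3-8B-Instruct-q4f16_1-MLC"  # example model id
    engine = MLCEngine(model)

    # Streaming chat completion, same shape as the OpenAI client API.
    for response in engine.chat.completions.create(
        messages=[{"role": "user", "content": "What is TVM?"}],
        model=model,
        stream=True,
    ):
        for choice in response.choices:
            print(choice.delta.content, end="", flush=True)

    engine.terminate()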
As a random aside, I hope y'all publish an SDXL repo for local (non-WebGPU) inference. SDXL is too compute-heavy to split/offload to the CPU the way Llama.cpp does, but it's less RAM-hungry than LLMs, and I'm thinking it would benefit from TVM's "easy" quantization.
It would be a great backend to hook into the various web UIs, maybe with the secondary model loaded on an IGP.
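By "easy" quantization I mean TVM's automatic Relay quantization pass, which on paper is just a couple of lines. A sketch (quantize_and_build is my own wrapper name; whether global-scale calibration survives an SDXL-sized UNet is exactly the open question):

    import tvm
    from tvm import relay


    def quantize_and_build(mod, params, target="vulkan"):
        """Quantize a Relay module with TVM's automatic quantization, then build it.

        mod/params are whatever a Relay frontend (e.g. relay.frontend.from_onnx) returns.
        Global-scale calibration is the simplest mode; a data-aware calibration dataset
        would likely be needed for something as big as the SDXL UNet.
        """
        with relay.quantize.qconfig(calibrate_mode="global_scale", global_scale=8.0):
            qmod = relay.quantize.quantize(mod, params)
        with tvm.transform.PassContext(opt_level=3):
            return relay.build(qmod, target=target)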
I don't think TVM has advertised its full capabilities much, at least in the past few years; for example, high-performance codegen for dynamic shapes without auto-tuning, or auto-tuning-based codegen. That might be one of the reasons it hasn't gotten a lot of visibility.
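For reference, the auto-tuning path with the auto-scheduler looks roughly like the sketch below (trial count, log file name and the tune_and_build wrapper are placeholders; the no-tuning dynamic-shape path is a separate workflow):

    # Sketch: auto-tuning-based codegen with TVM's auto-scheduler (Ansor).
    import tvm
    from tvm import auto_scheduler, relay


    def tune_and_build(mod, params, log_file="tuning.json", trials=2000):
        """Tune the kernels in a Relay module, then compile with the best schedules."""
        target = tvm.target.Target("vulkan", host="llvm")

        # Extract tunable tasks (conv, matmul, ...) from the model.
        tasks, task_weights = auto_scheduler.extract_tasks(mod["main"], params, target)

        # Search for good schedules and log the results.
        tuner = auto_scheduler.TaskScheduler(tasks, task_weights)
        tuner.tune(auto_scheduler.TuningOptions(
            num_measure_trials=trials,
            measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
        ))

        # Compile, picking the best schedule found for each task.
        with auto_scheduler.ApplyHistoryBest(log_file):
            with tvm.transform.PassContext(
                opt_level=3, config={"relay.backend.use_auto_scheduler": True}
            ):
                return relay.build(mod, target=target, params=params)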