Run Llama2-70B in Web Browser with WebGPU Acceleration (mlc.ai)
9 points by ruihangl on July 24, 2023 | 6 comments



Apache TVM is super cool in theory. It's fast thanks to autotuning, and it supports tons of backends: Vulkan, Metal, WASM + WebGPU, FPGAs, weird mobile accelerators and such. It supports quantization, dynamic shapes and other cool features.
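
To give a flavor of the multi-backend story: here's a minimal sketch using TVM's tensor-expression API to build one toy kernel for CPU, Vulkan and Metal. It assumes a TVM install with those codegen backends enabled, and GPU targets need explicit thread binding; treat it as an illustration, not the way MLC itself builds models.

    import tvm
    from tvm import te

    # A toy elementwise kernel written once in TVM's tensor-expression DSL.
    n = 1024
    A = te.placeholder((n,), name="A", dtype="float32")
    B = te.compute((n,), lambda i: A[i] * 2.0, name="B")

    # Retarget the same compute definition to several backends.
    for target in ["llvm", "vulkan", "metal"]:
        s = te.create_schedule(B.op)
        if target != "llvm":
            # GPU backends require binding loop axes to GPU threads.
            bx, tx = s[B].split(B.op.axis[0], factor=64)
            s[B].bind(bx, te.thread_axis("blockIdx.x"))
            s[B].bind(tx, te.thread_axis("threadIdx.x"))
        mod = tvm.build(s, [A, B], target=target)
        print(target, "->", mod)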

But... It isn't used much outside MLC? And MLC's implementations are basically demos.

I dunno why. AI inference communities are dying for fast multiplatform backends without the fuss of PyTorch.


Check out the latest docs: https://mlc.ai/mlc-llm/docs/. MLC started with demos but has lately evolved, with API integrations and documentation, into an inference solution that everyone can reuse for universal deployment.
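
To make that concrete, the Python side looks roughly like the sketch below. This assumes the mlc_chat package and a prebuilt quantized Llama-2 model; the model id and exact arguments are illustrative, so check the linked docs for the current names.

    # Sketch only: assumes `mlc_chat` is installed and a prebuilt
    # quantized model is available; the model id below is illustrative.
    from mlc_chat import ChatModule

    cm = ChatModule(model="Llama-2-7b-chat-hf-q4f16_1")
    print(cm.generate(prompt="What is Apache TVM?"))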


It's been a while since I looked into this, thanks.

As a random aside, I hope y'all publish an SDXL repo for local (non-WebGPU) inference. SDXL is too compute-heavy to split/offload to the CPU the way llama.cpp does, but it's less RAM-heavy than LLMs, and I'm thinking it would benefit from TVM's "easy" quantization.

It would be a great backend to hook into the various web UIs, maybe with the secondary model loaded on an IGP.


I don't think TVM has advertised its full capabilities much, for example high-performance codegen for dynamic shapes without auto-tuning, or auto-tuning-based codegen, at least in the past few years, and that might be one of the reasons it hasn't gotten a lot of visibility.
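
The dynamic-shape part in particular is easy to miss. A minimal sketch with the tensor-expression API (symbolic te.var length, no tuning), just to show the idea; whole-model dynamic shapes go through the higher-level pipeline, not hand-written kernels like this.

    import numpy as np
    import tvm
    from tvm import te

    # One kernel compiled once, with a symbolic length `n`.
    n = te.var("n")
    A = te.placeholder((n,), name="A", dtype="float32")
    B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
    f = tvm.build(te.create_schedule(B.op), [A, B], target="llvm")

    # The same compiled function handles any length at call time.
    for size in (8, 123, 4096):
        a = tvm.nd.array(np.random.rand(size).astype("float32"))
        b = tvm.nd.array(np.zeros(size, dtype="float32"))
        f(a, b)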


I think this is true of AI compilation in general. Torch-MLIR, AITemplate and really everything here flies under the radar:

https://github.com/merrymercy/awesome-tensor-compilers#open-...


Running purely in the web browser, generating 6.2 tok/s on an Apple M2 Ultra with 64 GB of memory.



