Hacker News

One year later and there is still no inference engine for diffusion LLMs

Students looking for a project to break into AI, please take this one!



Actually, NVIDIA made one earlier this year; check out their Fast-dLLM paper.
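For readers who haven't seen the paper: one of Fast-dLLM's core ideas is confidence-aware parallel decoding, where at each denoising step every masked position whose top-token probability clears a threshold is committed at once, instead of unmasking one token per step. Here is a toy sketch of that idea, assuming a masked diffusion LM setup; the `MASK` sentinel, the 0.9 threshold, and the numpy softmax are all illustrative stand-ins, not NVIDIA's code.

```python
import numpy as np

MASK = -1  # illustrative sentinel for a masked position

def parallel_unmask_step(tokens, logits, threshold=0.9):
    """One denoising step: commit every masked position whose top
    probability exceeds `threshold` (confidence-aware parallel decoding,
    in the spirit of Fast-dLLM). If nothing clears the threshold, commit
    the single most confident masked position so decoding still makes
    progress."""
    # row-wise softmax over the vocabulary
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    conf = probs.max(axis=-1)            # per-position confidence
    pred = probs.argmax(axis=-1)         # per-position greedy token
    masked = tokens == MASK
    commit = masked & (conf >= threshold)
    if masked.any() and not commit.any():
        # fall back to the most confident masked position
        idx = np.where(masked)[0][conf[masked].argmax()]
        commit = np.zeros_like(masked)
        commit[idx] = True
    out = tokens.copy()
    out[commit] = pred[commit]
    return out
```

The point of the threshold is the quality/speed trade-off the paper discusses: committing many low-confidence positions in parallel is fast but risks contradictory tokens, since the positions are decoded independently within a step.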


Thanks I’ll check it out!


Did I miss something? https://github.com/NVlabs/Fast-dLLM/blob/main/llada/chat.py

That’s inference code, but where is the high-performance web server?


Training code for diffusion models, inspired by nanochat: https://github.com/ZHZisZZ/dllm

Now someone needs to make it work with vLLM or something.
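That is harder than it sounds, because engines like vLLM schedule around autoregressive decoding: each step appends one token and reuses a growing KV cache. A diffusion LLM instead re-runs the model over the whole (fixed-length) sequence every denoising step. A minimal sketch of that loop, with a hypothetical `model_logits` callable standing in for a real model's forward pass:

```python
import numpy as np

MASK = -1  # illustrative sentinel for a masked position

def diffuse_generate(model_logits, length, steps):
    """Toy diffusion-LLM decode loop: start fully masked and, at each
    step, run the model over the WHOLE sequence and commit the single
    most confident masked position. Note there is no token-by-token
    append, which is the scheduling assumption autoregressive engines
    are built around."""
    tokens = np.full(length, MASK)
    for _ in range(steps):
        if not (tokens == MASK).any():
            break                               # everything decoded
        logits = model_logits(tokens)           # full-sequence forward pass
        probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
        probs /= probs.sum(axis=-1, keepdims=True)
        # only masked positions compete; unmasked ones are frozen
        conf = np.where(tokens == MASK, probs.max(axis=-1), -np.inf)
        pos = conf.argmax()
        tokens[pos] = probs[pos].argmax()
    return tokens
```

A diffusion-native serving engine would need to batch these full-sequence refinement steps across requests, which is a different continuous-batching problem from the one vLLM solves.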



