ServerlessLLM: Low-Latency Serverless Inference for Large Language Models (arxiv.org)
2 points by geuds 22 days ago



