Hacker News

The major clouds don't support serverless GPU because the architecture is fundamentally different from running CPU workloads. For Lambda specifically, there's no way to run multiple customers' workloads on a single GPU with Firecracker.

A more general issue is that the workloads that tend to run on GPUs are much bigger than a standard Lambda-sized workload (think a 20 GiB image with a smorgasbord of ML libraries). I've spent time working around this problem and wrote a bit about it here: https://www.beam.cloud/blog/serverless-platform-guide




> there's no way of running multiple customer workloads on a single GPU with Firecracker.

You can do this with SR-IOV-enabled hardware.

https://docs.nvidia.com/networking/display/mlnxofedv581011/s...
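For context on what SR-IOV actually does here: on Linux, an SR-IOV-capable PCI device can be split into virtual functions (VFs), each of which can be passed through to a separate microVM as its own PCI device. The kernel exposes this through standard sysfs attributes. A minimal sketch (the PCI address below is a placeholder, not a real device on any particular machine):

```shell
#!/bin/sh
# Hypothetical PCI address -- replace with an SR-IOV-capable device
# on your system (find candidates with: lspci -D).
DEV="0000:ff:1f.7"
SYSFS="/sys/bus/pci/devices/$DEV"

if [ -e "$SYSFS/sriov_totalvfs" ]; then
    # How many virtual functions the hardware can expose at most.
    cat "$SYSFS/sriov_totalvfs"

    # Writing N to sriov_numvfs carves the device into N VFs
    # (requires root); each VF shows up as its own PCI function
    # that can be handed to a separate guest.
    echo 4 > "$SYSFS/sriov_numvfs"

    # One virtfn* symlink per created VF.
    ls -d "$SYSFS"/virtfn*
else
    echo "device $DEV does not support SR-IOV (or does not exist)"
fi
```

Whether a given GPU exposes SR-IOV VFs at all depends on the vendor and SKU, and carving up the GPU's compute/memory on top of that typically needs vendor tooling (e.g. NVIDIA's vGPU stack); the sysfs knobs above only cover the PCI side.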



