I am worried about AWS imposing its own political rules on the models, for example censorship or extra safety requirements. It is hard for me to trust them as a central platform in this ecosystem.
+1 Bedrock supports custom model import, though I haven't used it and can't speak to limitations there. Also, this boilerplate provides a solid foundation for any LLM app, whether you use Bedrock or opt for models hosted elsewhere.
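On the hosting question, the calling code is a thin wrapper either way. A minimal sketch with boto3's bedrock-runtime client; the model ID is just an example, and my understanding is that a custom-imported model is invoked the same way via its model ARN:

    # Minimal Bedrock invocation sketch; the model ID is an example,
    # substitute whatever your account has access to.
    import json
    import boto3

    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = client.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 256,
            "messages": [{"role": "user", "content": "Hello"}],
        }),
    )
    print(json.loads(response["body"].read()))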
You can check out this technical deep dive on serverless GPU offerings and pay-as-you-go pricing. It includes benchmarks on cold starts, performance consistency, scalability, and cost-effectiveness for models like Llama 2 7B and Stable Diffusion across different providers: https://www.inferless.com/learn/the-state-of-serverless-gpus... It can save you months of time; do give it a read.
I'm confused: what's expensive about it? Isn't it a serverless pay-per-token model?
Do you mean specifically the Bedrock Knowledge Base/RAG feature? That uses serverless OpenSearch, which costs a minimum of about $200/month because it doesn't scale to zero.
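Back-of-the-envelope on why a no-scale-to-zero floor adds up (the rate and minimum capacity here are assumptions for illustration, not quoted AWS pricing):

    # An always-on capacity floor bills every hour of the month,
    # independent of traffic. The numbers below are assumed, not quoted.
    ocu_rate_per_hour = 0.24   # assumed $/OCU-hour
    min_ocus = 1.0             # assumed minimum capacity floor
    hours_per_month = 730

    floor = ocu_rate_per_hour * min_ocus * hours_per_month
    print(f"${floor:.0f}/month before a single query")  # ~$175/month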
I have not read too deeply into this, but do any of these serverless environments offer GPUs? I'm sure there are ... reasons, but the lack of GPU support in Lambda and Fargate remains a major pain point for AWS users.
It's kept me wrangling EC2 instances for ML teams, but I do wonder how much longer that will last.
The major clouds don't support serverless GPU because the architecture is fundamentally different from running CPU workloads. For Lambda specifically, there's no way of running multiple customer workloads on a single GPU with Firecracker.
A more general issue is that the workloads that tend to run on GPU are much bigger than a standard Lambda-sized workload (think a 20Gi image with a smorgasbord of ML libraries). I've spent time working around this problem and wrote a bit about it here: https://www.beam.cloud/blog/serverless-platform-guide
The big guys are lagging a bit, but there are many smaller parties offering serverless GPU.
I've been a quite satisfied customer of Runpod's serverless GPU offering, running a side project that uses computer vision to detect toxic clouds in webcam feeds of an industrial site.
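For anyone curious, a Runpod serverless worker boils down to a single handler function. A sketch from memory of their Python SDK (field names like frame_url are mine; check their docs for the current interface):

    # Sketch of a Runpod serverless worker; the payload schema is made up.
    import runpod

    def handler(event):
        # event["input"] carries the JSON payload sent to the endpoint
        frame_url = event["input"].get("frame_url")
        # ... run the vision model on the frame here ...
        return {"toxic_cloud_detected": False, "frame": frame_url}

    runpod.serverless.start({"handler": handler})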
If you want generative AI, try Replicate; they offer a more specialized product.
They use GPUs under the hood for inference/fine-tuning and charge by the token. Fireworks will even let you deploy a LoRA serverlessly at the same pricing as the base model.
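Usage on Replicate is about as minimal as it gets. A sketch with their Python client; the model slug is a placeholder, substitute any public model from replicate.com:

    # Requires REPLICATE_API_TOKEN in the environment.
    import replicate

    output = replicate.run(
        "owner/model-name",  # placeholder slug
        input={"prompt": "an astronaut riding a horse"},
    )
    print(output)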
But I'm not aware of any Lambda-like serverless for any old CUDA workload. Given loading times, it wouldn't really make sense. Something like Cloud Run or Knative for GPUs would be cool.
• Devs always want choice
• Open-source LLMs are getting better
• Anthropic ships fantastic models
• Doesn't expose your app's data to multiple companies
• Consolidated security, billing, and config in AWS
• Power of the AWS ecosystem