AWS AI Stack – Ready-to-Deploy Serverless AI App on AWS and Bedrock (github.com/serverless)
43 points by fitzgera1d 71 days ago | 19 comments



Increasingly bullish on AWS Bedrock.

• Devs will always want choice

• Open-source LLMs are getting better

• Anthropic ships fantastic models

• Doesn't expose your app’s data to multiple companies

• Consolidated security, billing, config in AWS

• Power of AWS ecosystem


I am worried about AWS imposing their own political rules on the models. For example, they may impose censorship or safety requirements. It is hard for me to trust them as a central platform in this ecosystem.


+1. Bedrock supports custom model import, though I haven't used it and can't speak to limitations there. Also, this boilerplate provides a solid foundation for any LLM app, whether you use Bedrock or opt for models hosted elsewhere.
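For anyone who hasn't tried it, here's a minimal sketch of calling a Bedrock-hosted model with boto3's Converse API (the region and model ID are placeholders; use whatever your account has access to):

    import boto3

    # Placeholder region and model ID; swap in whatever Bedrock
    # model your account has access to.
    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        messages=[{"role": "user", "content": [{"text": "Hello, Bedrock!"}]}],
    )

    print(response["output"]["message"]["content"][0]["text"])

Switching between models is largely a matter of changing the modelId, which is a big part of the appeal.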


You can check out this technical deep dive on serverless GPU offerings and pay-as-you-go pricing. It includes benchmarks around cold starts, performance consistency, scalability, and cost-effectiveness for models like Llama 2 7B and Stable Diffusion across different providers: https://www.inferless.com/learn/the-state-of-serverless-gpus... It can save you months of time. Do give it a read.

P.S.: I am from Inferless.


Last time I checked, Bedrock was quite expensive to operate at a small scale.


I'm confused; what's expensive about it? It's a serverless, pay-per-token model?

Do you mean specifically the Bedrock Knowledge Bases/RAG? That uses OpenSearch Serverless, which costs at minimum $200ish/month because it doesn't scale to zero.
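Back-of-envelope on that floor (the OCU rate and minimums here are assumptions from memory; check current AWS pricing):

    # OpenSearch Serverless never scales to zero, so you pay for the
    # minimum OCUs around the clock. Rate and minimums are assumptions.
    ocu_hourly = 0.24        # assumed $/OCU-hour
    min_ocus = 0.5 + 0.5     # assumed indexing + search minimums
    monthly = ocu_hourly * min_ocus * 24 * 30
    print(f"~${monthly:.0f}/month floor")  # -> ~$173/month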


Serverless Bedrock helps. It's close enough to the per-token pricing of others if you need to be on AWS.


I have not read too deeply into this, but do any of these serverless environments offer GPUs? I'm sure there are ... reasons, but the lack of GPU support in Lambda and Fargate remains a major pain point for AWS users.

It's been keeping me wrangling EC2 instances for ML teams, but I do wonder how much longer that will last.


The major clouds don't support serverless GPU because the architecture is fundamentally different from running CPU workloads. For Lambda specifically, there's no way of running multiple customer workloads on a single GPU with Firecracker.

A more general issue is that the workloads that tend to run on GPU are much bigger than a standard Lambda-sized workload (think a 20Gi image with a smorgasbord of ML libraries). I've spent time working around this problem and wrote a bit about it here: https://www.beam.cloud/blog/serverless-platform-guide


> there's no way of running multiple customer workloads on a single GPU with Firecracker.

You can do this with SR-IOV-enabled hardware.

https://docs.nvidia.com/networking/display/mlnxofedv581011/s...


The only big one I know of is Cloud Run on GCP.

https://cloud.google.com/run/docs/configuring/services/gpu


This sounds very compelling. Thanks!


I know for sure this has been on AWS's roadmap for multiple years now. re:Invent is near. Let's see if they can ship.


The big guys are lagging a bit, but there are many smaller parties offering serverless GPU.

I've been a quite satisfied customer of Runpod's serverless GPU offering, running a side project that uses computer vision to detect toxic clouds in webcam feeds of an industrial site.

If you want generative AI, try Replicate, as they offer a more specialized product.


They use GPUs under the hood for inference/fine-tuning and charge by token. Fireworks will even let you deploy a LoRA serverlessly at the same pricing as the base model.

But I'm not aware of any "Lambda"-like serverless for any old CUDA workload. Given loading times, it wouldn't really make sense. Something like Cloud Run or Knative for GPUs would be cool.


Introducing the AWS AI Stack

A serverless boilerplate for AI apps on trusted AWS infra.

• Full-Stack w/ Chat UI + Streaming (see the streaming sketch below)

• Multiple LLM Models + Data Privacy

• 100% Serverless

• API + Event Architecture

• Auth, Multi-Env, GitHub Actions & more!

GitHub: https://github.com/serverless/aws-ai-stack

Demo: https://awsaistack.com
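On the Chat UI + Streaming bullet: here's a minimal sketch of what streaming tokens from Bedrock looks like with boto3's ConverseStream API (the model ID is a placeholder; the actual repo wires this through Lambda and the chat UI):

    import boto3

    bedrock = boto3.client("bedrock-runtime")

    # Placeholder model ID; text deltas arrive as contentBlockDelta events.
    response = bedrock.converse_stream(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        messages=[{"role": "user", "content": [{"text": "Tell me a joke"}]}],
    )

    for event in response["stream"]:
        if "contentBlockDelta" in event:
            print(event["contentBlockDelta"]["delta"]["text"], end="", flush=True)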


I don’t get it. How many people need to deploy their own custom AI chat apps over standard models?


This is meant to be a boilerplate or an example of how to build a Serverless AI app on AWS. You can clone the repo and customize it however you like.


Then they need to go one step further and do what Replit did: an AI engineer generates code that gets deployed to this AWS AI Stack.



