
I find serverless to be needlessly complex. I'd rather write an HTTP server and serve it off of a t3.micro instance (also free-tier eligible). So much simpler for side projects.
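For what it's worth, "write an HTTP server" really can be tiny in Rust with just the standard library (a minimal sketch; in practice you'd reach for a framework like axum and put TLS or a reverse proxy in front):

    use std::io::{Read, Write};
    use std::net::TcpListener;

    fn main() -> std::io::Result<()> {
        // Bind to all interfaces; on a t3.micro you'd open the port
        // in the instance's security group.
        let listener = TcpListener::bind("0.0.0.0:8080")?;
        for stream in listener.incoming() {
            let mut stream = stream?;
            let mut buf = [0u8; 1024];
            let _ = stream.read(&mut buf); // ignore the request details
            stream.write_all(b"HTTP/1.1 200 OK\r\nContent-Length: 13\r\n\r\nHello, world!")?;
        }
        Ok(())
    }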



I find "serverless" is indeed more complex, because it's a higher abstraction layer. Often, I see people deploying containers lambdas or pods that are full unix environments, with permissions, network routing, filesystems etc. And then because it's "serverless" they use permissions (IAM), networking stuff (VPC, etc), filesystems (S3 etc), and other capabilities that they already have in a lower abstraction level (unix) and are sort of also using. So the complexity of a unix server is a unix server, but the complexity of "serverless" is a unix server plus all the capabilities you duplicated at a higher abstraction level.

Many other commenters replying to https://news.ycombinator.com/item?id=36693471 are interpreting "complex" as "hard for me to set up." I think that's neither here nor there -- no matter what's underneath, you can always rig something to deploy it with the press of a button. The question is: how many layers of stuff did you just deploy? How big of a can of worms did you just dump on future maintainers?


Serverless is too broad a category to say things like "it's too complex". For example, if you already know Docker, you can use Google Cloud Run and just deploy the container to it. You then just say "I want to allow this many simultaneous connections, a minimum of N instances, a maximum of M instances, and each instance should have X vCPUs and Y GB of RAM".
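Roughly, that whole configuration is one deploy command (a sketch; the service, project, and image names are placeholders):

    gcloud run deploy my-service \
      --image gcr.io/my-project/my-image \
      --concurrency 80 \
      --min-instances 1 \
      --max-instances 10 \
      --cpu 2 \
      --memory 1Gi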


When starting this project I thought the same thing, but having done it, I honestly cannot tell much of a difference. Yes, there are two more steps in setting up the Lambda function, but in the end you still write an HTTP server and have Lambda serve it.
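For anyone curious, with the lambda_http crate the Rust side does look like an ordinary handler (a sketch, assuming the lambda_http and tokio crates):

    use lambda_http::{run, service_fn, Body, Error, Request, Response};

    // An ordinary request -> response handler; lambda_http adapts it
    // to the Lambda runtime API behind the scenes.
    async fn handler(_event: Request) -> Result<Response<Body>, Error> {
        Ok(Response::builder()
            .status(200)
            .body(Body::from("Hello, world!"))?)
    }

    #[tokio::main]
    async fn main() -> Result<(), Error> {
        run(service_fn(handler)).await
    }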


Using a decent IaC framework such as Serverless Framework or the CDK instead of the AWS CLI would make the deployment pretty easy.
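For example, a minimal serverless.yml for a Rust binary on the custom runtime could look something like this (a sketch; the service and handler names are placeholders):

    service: my-api
    provider:
      name: aws
      runtime: provided.al2
    functions:
      api:
        handler: bootstrap
        events:
          - httpApi: '*'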


While writing the article (but after I had already done my research), I also found that cargo-lambda has grown some additional functionality that could have removed the need for the AWS CLI. I wanted to get the article out, though, so I didn't test-drive it.
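For reference, the cargo-lambda flow in question is roughly (a sketch; the function name is a placeholder):

    # cross-compile for the Lambda target
    cargo lambda build --release
    # create or update the function, no AWS CLI involved
    cargo lambda deploy my-function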


When using an EC2 instance, testing, deployment, and adding new endpoints are all simpler.


Easier for you*. I've done both for years now, and I find developing, deploying, and testing lambdas much simpler.


I agree on testing and dev, but for deployment I think stuff like Elastic Beanstalk or App Engine strikes a good balance. I almost never run pure EC2.


“Serverless” often has some upfront complexity, but I greatly prefer it because once I have it running I’ve never had scaling issues or even had to think about them. To each their own, and I’m sure that serverless isn’t the answer for everyone, but for my projects (which are very bursty, with long periods of inactivity) it’s a dream.


It's a bit easier in Python if you use tools like https://www.serverless.com/. I'm not sure if Rust has something similar yet.


At the cost of being very specific to Rust, Shuttle is pretty damn simple. https://www.shuttle.rs/
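For context, a hello-world on Shuttle with axum is about this much code (a sketch, assuming the shuttle_runtime and shuttle_axum crates):

    use axum::{routing::get, Router};

    async fn hello() -> &'static str {
        "Hello, world!"
    }

    // The macro hands the router to Shuttle's runtime;
    // `cargo shuttle run` serves it locally, `cargo shuttle deploy` ships it.
    #[shuttle_runtime::main]
    async fn main() -> shuttle_axum::ShuttleAxum {
        let router = Router::new().route("/", get(hello));
        Ok(router.into())
    }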


It's kind of unclear to me: can I use Shuttle without using shuttle.rs (the platform) to actually run it?

Not that I am against paying for a service, but the idea of writing my app with a specific library against a specific platform makes me uneasy.

They have a GitHub project, but I think that is just the CLI + Rust libs?


From what I've read you can, but I haven't tried myself or looked into it too deeply.


First login failed with: "Callback handler failed. CAUSE: Missing state cookie from login request (check login URL, callback URL and cookie config)", but after retrying it went to the Projects list. The API key copy button doesn't do anything.


Yeah, it seems the premise of serverless is that your code always restarts, which is exactly the same as the cloud. The only difference is whose problem the trillion explosive gotchas in the giant 200GB of free middleware called GNU/Linux are: their employees' with serverless, versus yours with the cloud.

UNIX is close to turning 50, and people are fundamentally paying, as well as getting paid, to make a program loop back to its beginning instead of exiting. I think this is kind of wrong.


It depends what you’re doing. I’ve run many side projects off a single Lambda function with the “function URL” config enabled. I pay $0 because of the free tier, and updating the code is as simple as pushing a ZIP file. No SSH, no OS updates, nothing else to worry about. You start to get into trouble when you try to break your app into tons of microservices without some kind of framework or deployment tooling to keep it straight.
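The deploy step really is one command (a sketch; the function name and ZIP path are placeholders):

    aws lambda update-function-code \
      --function-name my-function \
      --zip-file fileb://function.zip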


What about serverless do you find to be “needlessly complex”?


There are just too many required parameters to create a single handler, and then you need to repeat that for each handler. Take a look at a complete Terraform example for a lambda (a minimal sketch of the moving parts is below): https://github.com/terraform-aws-modules/terraform-aws-lambd...

For a personal project it's just a bit much in my experience, especially since most personal projects can easily be served by a t3.micro.
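To make that concrete, here's a sketch of the minimum moving parts for one zipped custom-runtime handler (resource names are placeholders, and a real setup would add log groups, a function URL or API Gateway wiring, and so on):

    # Even a minimal function needs an execution role...
    resource "aws_iam_role" "lambda" {
      name               = "my-fn-role"
      assume_role_policy = jsonencode({
        Version = "2012-10-17"
        Statement = [{
          Action    = "sts:AssumeRole"
          Effect    = "Allow"
          Principal = { Service = "lambda.amazonaws.com" }
        }]
      })
    }

    # ...permission to write logs...
    resource "aws_iam_role_policy_attachment" "logs" {
      role       = aws_iam_role.lambda.name
      policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
    }

    # ...and the function itself, repeated per handler.
    resource "aws_lambda_function" "fn" {
      function_name = "my-fn"
      role          = aws_iam_role.lambda.arn
      runtime       = "provided.al2"
      handler       = "bootstrap"
      filename      = "lambda.zip"
    }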


Thanks for clarifying. That’s a fair critique.


To be fair, it is (mostly) a Rube Goldberg machine designed to keep backend engineers employed.




