Show HN: Servers.lol – Should Your EC2 Be a Lambda? (servers.lol)
150 points by adjohn on Dec 14, 2017 | 96 comments



A few weeks ago I created a small framework called lambdaphp[1]. The site https://www.lambdaphp.host/ is hosted on AWS lambda only!

My aim was to host a WordPress or Laravel site on AWS Lambda without paying any monthly hosting charges. I got everything running on it (sessions, fs, requests, etc.).

Some things work differently, though: when you write to a file using file_put_contents, etc., it writes to an S3 bucket instead of the local fs (thanks to PHP's S3 stream wrapper), and when you create a session, it uses AWS DynamoDB behind the scenes.

I've created some examples like this sign-in example which uses AWS Cognito + AWS lambda to create a login/signup page[2], file uploader, etc[3].

Of course my project was just for my own amusement, but I think this can be good for side projects where you just want to host something and forget about it.

[1] https://github.com/san-kumar/lambdaphp [2] https://www.lambdaphp.host/examples/auth/ [3] https://github.com/san-kumar/lambdaphp#examples


This is a very neat project. For folks where this doesn't quite fit but who still want the serverless experience, check out OpenFaaS and our PHP template too: https://github.com/openfaas/faas and https://github.com/openfaas/faas-cli#templates. We use Composer to install components at build time with Docker (not on your host system directly).


Thank you. Also just want to mention that per AWS's recent announcement, they are going to make Amazon RDS Lambda-like, i.e. you don't pay a monthly fee for your MySQL server; instead you pay per query or per unit of CPU consumption.


That sounds interesting - do you have a link?


https://aws.amazon.com/blogs/aws/in-the-works-amazon-aurora-...

Only in preview, only in us-east-1, and only the Aurora MySQL flavor. Other regions/flavors to follow.


Can you deploy a functional Wordpress blog on Lambda? Would love to try it.


I certainly hope so, though I haven't tried it myself. I've managed to get the sessions, routing, fs, etc. working just fine without any additional configuration, but of course there will be some quirks during the actual WordPress installation which I haven't explored yet. Maybe next week I will try it :)


WordPress depends heavily on a relational database, so hosting just on lambda would be hard.


Found this AWS migration tool for Wordpress. Uses Lambda along with S3.

https://github.com/mscifo/pressless


Check this out: https://aws.amazon.com/lambda/pricing/

> The Lambda free tier includes 1M free requests per month and 400,000 GB-seconds of compute time per month.

> The Lambda free tier does not automatically expire at the end of your 12 month AWS Free Tier term, but is available to both existing and new AWS customers indefinitely.

(emphasis added)

That is awesome, providing you can get your application compatible with Lambda. Zappa may help for WSGI-based (Python) applications: https://github.com/Miserlou/Zappa

More on AWS Lambda: https://aws.amazon.com/lambda/
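
For a rough sense of how far that free tier stretches, here's a back-of-the-envelope sketch; the 128 MB memory size and 200 ms average duration are illustrative assumptions, not anyone's measured workload:

    # Rough free-tier headroom estimate for AWS Lambda (prices as of late 2017).
    # Memory size and average duration below are assumptions for illustration.
    FREE_REQUESTS = 1000000        # free requests per month
    FREE_GB_SECONDS = 400000       # free compute per month

    memory_gb = 0.128              # assumed 128 MB function
    avg_duration_s = 0.2           # assumed 200 ms per invocation

    gb_seconds_per_invocation = memory_gb * avg_duration_s
    invocations_within_compute = FREE_GB_SECONDS / gb_seconds_per_invocation

    print(f"Compute free tier covers ~{invocations_within_compute:,.0f} invocations")
    print(f"Request free tier covers {FREE_REQUESTS:,} invocations")
    # Whichever of the two ceilings you hit first is the real free-tier limit.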


Zappa isn't just for WSGI; you can also build massively parallel, event-driven applications with async task invocation!

https://github.com/Miserlou/Zappa#asynchronous-task-executio...

:D
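
A minimal sketch of what that pattern looks like, going by the Zappa README; the function names here are made up, and depending on your Zappa version the decorator is imported from zappa.async or zappa.asynchronous:

    # Hedged sketch of Zappa's async task invocation. Older Zappa releases used
    # `from zappa.async import task`; newer ones use `zappa.asynchronous`.
    from zappa.asynchronous import task

    @task
    def resize_avatar(user_id):
        # Runs as a separate, asynchronous Lambda invocation instead of blocking
        # the web request that triggered it. (user_id is a made-up parameter.)
        print("resizing avatar for", user_id)

    def signup_view(user_id):
        # Returns immediately; the heavy lifting happens in a second invocation.
        resize_avatar(user_id)
        return "ok"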


This would be way more useful if there were an indefinite free tier (or just smooth pay-per-usage billing!) for API Gateway. I have seriously considered moving my personal projects to Azure because Functions can be triggered via HTTPS without a $3.50 charge cliff.


> Providing you can get your application compatible with Lambda.

I know Erica at iOpipe and like what they are building. OpenFaaS is an alternative that can run on Docker or Kubernetes to provide a more flexible serverless environment with containers. Check it out https://www.openfaas.com


I'm sure it's quite obvious to those who work intimately with Amazon services, but for the rest of us, and because it's un-googlable, this apparently refers to a specific service, AWS Lambda. It would be helpful to include some explanation of that on the landing page.


Point well taken, we'll iterate on this and add some background on Lambda.


I watched the overview video below, and the TL;DR for me is that although serverless is doable now and can be done well, the tooling is not there yet, and you're also locked into AWS. So check back in 2-3 years.

For me, I think Kubernetes + virtual kubelet (for container instance creation, with per-second billing, only on Azure right now) is the safest option. It gives the benefit of easily scalable compute with no need to move to an event-driven model and no constraints on function duration and resource usage.

https://www.youtube.com/watch?v=1fBbSgJJV_g

https://github.com/virtual-kubelet/virtual-kubelet


You can do the same thing (just run a horizontally scalable container for me with per-second billing) on AWS Fargate: https://aws.amazon.com/fargate/

It will also support Kubernetes pods next year.

Disclaimer: I work for AWS.


It will be great to have Fargate as a provider for the virtual kubelet along with Azure (and I forgot to mention Hyper.sh). Do you know if AWS plans to build a virtual kubelet provider themselves or leave it to the community?


What's the startup time like with fargate? Is it in the region of a few seconds, or is it more like minutes?


I’ve read reports of fargate startup times around 45s. Not fast, but not terribly slow either.


Thanks, that really helps place it in terms of where it might fit with my workflow.


Servers.lol was a side project for us to figure out if some of our own workloads were a good fit. Our day jobs are to build tooling for serverless applications. You can find more at http://iopipe.com


Nice calculator, but it's missing an important field: how many hours per day do you keep your machines running? It assumes you run 24/7.

S3stat's nightly job needs 60 or 70 hours to run, but it all has to happen in the few hours after Amazon finishes delivering logfiles for the day. So I spin up 15 or so machines as spot instances that churn through the queue then shut down.

This calculator tells me that should cost $600 or so per day, when in reality it is more like $12. So even though it claims Lambda could do the same thing for $5, it'd take a long time to win back the engineering cost of making the switch. (A little over a month for every hour I spent working on it, it seems.)
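
To make the gap concrete, here's the rough arithmetic; the per-hour prices below are assumptions picked only to roughly mirror the figures above, not S3stat's actual bill:

    # Back-of-the-envelope: a short-lived spot fleet vs. the calculator's 24/7
    # assumption. Prices and job shape are illustrative assumptions.
    instances = 15
    on_demand_per_hour = 1.66   # assumed on-demand price per instance
    spot_per_hour = 0.18        # assumed spot price per instance
    hours_per_run = 4.5         # ~67 instance-hours of actual work per night

    spot_burst_per_day = spot_per_hour * instances * hours_per_run
    always_on_per_day = on_demand_per_hour * instances * 24

    print(f"Spot burst: ~${spot_burst_per_day:,.0f}/day")   # roughly $12
    print(f"Always-on:  ~${always_on_per_day:,.0f}/day")    # roughly $600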


One of the big value propositions of the cloud was to automatically provision extra servers in response to load - either like you are doing, or by detecting load spikes and spinning up instances.

Trouble is, people don't seem to do that. I see a lot of 'Boss says use the cloud, let's just provision our current network in AWS 24/7'.

I kind of see serverless and Redshift as a workaround for the fact that users are currently mostly doing it wrong, and will eventually figure out that an always-on cloud deployment is more expensive than just renting a server or colo space.


What it doesn't mention is the high latency at the 99th percentile of requests. If that's something you can't tolerate, avoid Lambda for now.


It’s also worth noting Lambda still lacks a formal SLA.


Given how relaxed their SLAs are (S3 starts at 99% availability, and that's defined only by returned errors, excluding timeouts), does it even matter?


Is the high latency you mention due entirely to cold starting instances, or is there another cause?


We had huge problems with Java Lambda startup times. I would strongly recommend Python/Node.js or maybe Go. For whatever reason, .NET Core was also slower, but not holy-crap slow like Java.


It seems to be. It can be "cold" when the Lambda hasn't run at all in a while and the first instance starts, or "cold" when, due to an increase in load (or whatever other reason you're not supposed to know or care about), instances are already running but the request lands on a new one.


The thing that makes me nervous about Lambda is the lack of price capping to prevent buggy functions racking up thousands of dollars in a weekend.


I'd add a CloudWatch alarm tracking the number of invocations you expect in a given hour. You can also set a concurrency limit (a relatively new feature).
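
A hedged boto3 sketch of both safeguards; the function name, thresholds, and SNS topic ARN are placeholders:

    # Hedged sketch: alert on unexpected invocation volume and cap concurrency.
    import boto3

    cloudwatch = boto3.client("cloudwatch")
    lambda_client = boto3.client("lambda")

    # Alarm if the function is invoked more than expected within an hour.
    cloudwatch.put_metric_alarm(
        AlarmName="my-function-invocation-spike",
        Namespace="AWS/Lambda",
        MetricName="Invocations",
        Dimensions=[{"Name": "FunctionName", "Value": "my-function"}],
        Statistic="Sum",
        Period=3600,
        EvaluationPeriods=1,
        Threshold=10000,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
    )

    # Reserve (and thereby cap) the function's concurrent executions.
    lambda_client.put_function_concurrency(
        FunctionName="my-function",
        ReservedConcurrentExecutions=10,
    )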


At Re:Invent last month, AWS introduced per-function concurrency limits that users can set. It can still rack up charges, but it offers a bit more control than before!


That's where you apply some software developer diligence and do proper testing and monitoring. You're saying right now that regular servers / instances allow developers to be lazy because hey, automatic caps.


Everyone makes mistakes. If a dev deploys something to a server with fixed costs that starts hammering CPU at least we won't be hit by literally thousands of dollars in extra charges.


Wonderful website! Really appreciate the detail put into this. And it worked very smoothly on mobile.

Indeed, lambdas are worth considering for some but not all projects, and this tool seems helpful for people who may need a place to start researching their options.


This claims that on close-to-breakeven applications, moving to serverless might reduce dev costs or accelerate dev work... Serverless proponents: how does that work in theory?

In practice, all the serverless guys I've seen have a pre-CI pre-VCS pre-UAT kind of workflow using the AWS console where there's no change control or ability to collaborate.


I've been using Zappa https://github.com/Miserlou/Zappa for a few side projects and even at work, and you can definitely set up a robust version-controlled, CI + UAT + CD pipeline if you have the discipline.

* Dev: develop inside Docker container identical to Lambda environment https://github.com/lambci/docker-lambda

* CI: test via Jenkins/GoCD/Travis/CircleCI, inside docker

* Staging: deploy to staging host on merge

* UAT: against staging env, break build if any user-facing tests fail

* Prod: deploy exact same package we deployed to Staging


CI/CD is pretty easy. Another commenter mentioned Jenkins+Terraform. A year or so ago, I did a great project using just Jenkins+CloudFormation for Lambda deployments, too. Took a bit of initial investment but after that it was maybe the best development experience I've had, despite the tooling being lackluster and best practices still sort of emerging. We used TypeScript to build microservices in the backend with API Gateway talking to a proxy tier, Angular 2 in the frontend. All super enjoyable to use.


Of course it depends on the use case, but for smaller functions it definitely reduces dev work. For AWS, Lambdas can be versioned and aliased. Blue-green deployments are trivial since you can switch Lambda versions on the fly. Rolling back and invoking dev versions work similarly.
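
A hedged boto3 sketch of that alias-based switch; the function and alias names are placeholders, and your deploy tooling may well wrap these calls for you:

    # Hedged sketch of an alias-based blue/green switch.
    import boto3

    lam = boto3.client("lambda")

    # Publish the newly deployed code as an immutable version.
    new_version = lam.publish_version(FunctionName="my-function")["Version"]

    # Point the alias your callers use at the new version.
    lam.update_alias(
        FunctionName="my-function",
        Name="live",
        FunctionVersion=new_version,
    )
    # Rolling back is the same call with the previous version number.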


You can now even do canary deployments where you switch a small amount of traffic to a new version of your Lambda. The FaaS (function as a service) ecosystem is rapidly reaching parity with traditional software.

Disclaimer: I work for AWS
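
For reference, a hedged boto3 sketch of that weighted routing; the function name, alias, and version numbers are placeholders:

    # Hedged sketch of a weighted-alias canary deployment.
    import boto3

    lam = boto3.client("lambda")

    # Keep the alias on version 4, but send 5% of invocations to version 5.
    lam.update_alias(
        FunctionName="my-function",
        Name="live",
        FunctionVersion="4",
        RoutingConfig={"AdditionalVersionWeights": {"5": 0.05}},
    )

    # Once version 5 looks healthy, promote it and drop the extra weight.
    lam.update_alias(
        FunctionName="my-function",
        Name="live",
        FunctionVersion="5",
        RoutingConfig={"AdditionalVersionWeights": {}},
    )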


Some ideas here re automated testing, CI and deployment:

https://hackernoon.com/yubls-road-to-serverless-part-2-testi...


My team uses the Serverless Framework (serverless.com), Git with GitHub, and automated deployments from CI/CD.

AWS also offers some of these pieces fully integrated via CodeDeploy (which we do not ourselves use).


We've got a CI pipeline that deploys Lambda functions via Terraform and Jenkins. There are some slightly tricky bits but it's not really that hard to do.


So, like, CI uses Terraform to populate all of your functions + other resources in a temporary environment and runs whole-system integration tests?


You lost me at "pre-VCS". Seriously.


Yeah... I tend to end up on "we're kind of like a start-up inside the Enterprise" teams that frequently fail to see the benefit of change control or release engineering. They're edit-on-production PHP cowboys, or Java devs used to feeling like operations and integrating their code changes are someone else's problem, but who have no such support here.

Teams that have (only fairly recently) discovered AWS lets them escape the snail's pace of getting something into production within Enterprise IT, their little walled garden that IT has siloed them into (for their own good). They just absolutely love Node + Lambda because servers are scary (they tend not to have the CLI skills to even cd/ls around the filesystem) and it lets them get things done -- which is all their superiors value.

Places where IT doesn't value time to market, and nobody values maintenance or costs (we regularly have products get owned because we fail to apply patches to the OSS they use; we rack up AWS bills by leaving manually created test resources around for months, or by ramping up services like Lambda to the moon for a batch job and then forgetting to turn it back down once it finishes).


It's unclear if the Requests/minute is per instance or total. I tried putting in 500k/minute (total) and got a $40k Lambda bill vs $2k EC2 bill (which is roughly what we pay). If I switch it to 20k/minute (per instance) then it's more-or-less the same price for either.


I get the feeling that Lambda is only good for small apps.

Once request volume goes up, instances are much cheaper.

So Lambda makes sense for hobby stuff and those small services that you only use occasionally. And then again, the financial motivation probably has more to do with maintenance cost.


It can be good for very bursty things too, particularly in the background. I process 'lots' of JSON on S3 using it. With no setup or anything complex I can churn through a TB of JSON in a few minutes. Not sure of the total time, but it's less time than it takes me to sync the resulting files to my local machine. Running a thousand things at once for a few minutes is great.
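
For anyone curious, the handler for that kind of fan-out is tiny. A hedged sketch, with the actual per-object processing left as a placeholder:

    # Each new S3 object under a prefix triggers its own invocation, so a
    # thousand objects fan out into a thousand parallel executions.
    import json
    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        # Standard S3 "object created" event shape.
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            data = json.loads(body)
            # ... placeholder: do the real per-object work on `data` here ...
            print("processed %s (%d bytes)" % (key, len(body)))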


The good news is that with Azure Functions you can choose consumption mode or pay for the specific hardware you need, and run it either way.


It's the total number, sorry for that not being clear. We will make some improvements that may help clear that up.


According to this our app would be $44k in lambda and $2k in EC2.

I suspect we’re paying a great deal more than that for EC2 but I don’t see those bills.


serverless is the shake weight of tech.

in all seriousness a very useful and cool tool. thanks for sharing!

i'd have picked a more informative domain; maybe serverless.info?

nice to know that is currently not cost effective for my workload.

- lambda = $3,327.05

- ec2 = $1,146.24


Serverless is great for all the tiny little things I want to schedule and forget, or set up a trigger for and forget. But my type of usage is not going to pay the bills for serverless.

I just don't get why anyone is using stuff like Lambda for big projects unless they have extremely unpredictable and extremely spiky traffic.

Because any kind of sustained usage will quickly be cheaper to put on a nano instance and scale up from there.


The main disadvantage of serverless though is the need to learn an entirely new tech stack, and one that is locked in to AWS. Migrating a large (or medium sized) Django app to Lambda has non-trivial cost in terms of engineering time. Also the Lambda stack is less well known, which may cause some challenges for the engineering team (ramp-up time for new hires, etc.)


Eh - I wouldn't say an entirely new tech stack. The main difference is breaking your components into microservices; after that - you just need a wrapper for your serverless provider of choice. The majority of the logic should be reusable though.


How is a C# Lambda an entirely new tech stack if you already use .NET Core?


This is great, I wanted to make something like this once but never got round to it.

Mainly because the pricing calculator from AWS is pure trash


I'm curious, is there an intersection of people who make significant use of AWS who purport to support the open source movement? If so, how do you reconcile centralizing so much of your business/personal data and technology influence behind such a proprietary platform?


I upload my code to AWS, and it gets run.

You save your code to your Intel laptop, and it gets run.

Neither of us has the complete source code that is used to run that code. You have to be ok with that, and drawing a line in the sand at the hypervisor level just seems super arbitrary.


When you write against lambda and S3 it's not the same as code just being run on a laptop.


i can buy an amd laptop and run my code. where are you gonna run your aws-architected code if aws changes their policies to something you don't like? hint: it's called vendor lock-in.


Ok, so then I spend 5 minutes changing some boilerplate and upload it to Azure/Google Cloud/a server I bought on craigslist? Even if that did happen, I probably would still have saved time overall, because I didn't have to maintain physical infrastructure between now and whenever that distasteful policy change takes place.


nope. if your code takes 5 minutes to rewrite, then who cares? if it takes longer, then you have a big, expensive, risky problem.


It's not the code that takes 5 minutes to rewrite, it's the AWS entry point.


I call bullshit. But if you want to do the experiment, on video, I'll watch it.


So you write to an abstraction layer rather than directly to vendor APIs.
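
A minimal sketch of that idea, with a made-up handle_request as the provider-agnostic core and a thin Lambda adapter around it:

    # The business logic knows nothing about AWS; only the adapter at the
    # bottom is Lambda-specific, so switching vendors means rewriting the
    # adapter rather than the application. Names here are illustrative.
    import json

    def handle_request(path, body):
        # Provider-agnostic application logic.
        return {"status": 200, "body": {"echo": body, "path": path}}

    def aws_lambda_handler(event, context):
        # Adapter: translate an API Gateway proxy event into the generic call.
        result = handle_request(
            path=event.get("path", "/"),
            body=json.loads(event.get("body") or "{}"),
        )
        return {"statusCode": result["status"], "body": json.dumps(result["body"])}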


You can even get Pharo to run on AWS Lambda - https://gitlab.com/macta/PharoLambda

However what's really interesting is that it serialises any crash to S3 which you can then pull down and run/fix in your local Pharo debugger - https://twitter.com/martinfowler/status/897083875003969536?l...


If Lambda supported Powershell, I'd already have moved hundreds of small Active Directory management scripts there.

And I'd move at least a couple personal projects there, instead of porting them to Node.

I know I'm in the minority, but there are dozens of us.


On AWS Lambda you could copy the PowerShell Linux binary into your project and call it from wrapper code in Node.js, Python, Java, C#, or Go.

Over in Microsoft land, Azure Functions natively supports PowerShell according to this blog post: https://david-obrien.net/2016/07/azure-functions-PowerShell/
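
A hedged sketch of what that wrapper could look like in Python; the path to the bundled pwsh binary and the script name are assumptions about how you'd package it, not a documented layout:

    # Bundle the Linux PowerShell binary in the deployment package and shell
    # out to it from the Lambda handler. Paths and script names are assumed.
    import subprocess

    def handler(event, context):
        result = subprocess.run(
            ["./pwsh/pwsh", "-NoProfile", "-File", "manage-ad.ps1"],
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            universal_newlines=True,
            timeout=60,
        )
        if result.returncode != 0:
            raise RuntimeError(result.stderr)
        return {"output": result.stdout}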


I see some systems that might be prime for the move, thanks!


Is there any work emerging on a standard environment/framework for serverless code? As it stands it all represents vendor-lock-in due to the comparatively high costs of moving between stacks. An environment that's deployable to any of the serverless providers would be ideal.


I believe Serverless works with a bunch of different providers. I have no affiliation with this project but it's a pretty good tool! https://serverless.com/


This is about as generic as you can get: https://github.com/openfaas/faas/blob/master/README.md (any process type on a Swarm or k8s cluster).


> perhaps its time to ponder if a more modern architecture would save you money or positively affect developer experience.

That's condescending and inappropriate at the same time (since it knows nothing about "modernity" in architecture).


Nice. No feedback. But the share link at the end is just the root domain: https://imgur.com/gallery/FL0xa


I'd venture the "without results" caveat just above that link might be why.


If you click on the "Share results" button it will generate a shareable URL for you.


Perhaps you could factor in something about cold start?

I know that in my context, this has been a key issue. Another odd one is that our organization has only certified RHEL and CentOS for running server-side code.


What exactly are your issues with cold start? This typically only affects functions that are called a few times a day, and can be solved for those functions by running a CloudWatch event that triggers your function every few minutes or so. You get 1 million free invocations a month, so this typically doesn't increase your bill.

Disclaimer: I work for AWS
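
For reference, a hedged boto3 sketch of that keep-warm setup; rule name, function name, and ARNs are placeholders:

    # Hedged sketch: a CloudWatch Events rule pings the function every 5 minutes.
    import boto3

    events = boto3.client("events")
    lam = boto3.client("lambda")

    rule_arn = events.put_rule(
        Name="keep-my-function-warm",
        ScheduleExpression="rate(5 minutes)",
    )["RuleArn"]

    # Allow CloudWatch Events to invoke the function.
    lam.add_permission(
        FunctionName="my-function",
        StatementId="allow-keep-warm-rule",
        Action="lambda:InvokeFunction",
        Principal="events.amazonaws.com",
        SourceArn=rule_arn,
    )

    events.put_targets(
        Rule="keep-my-function-warm",
        Targets=[{"Id": "1",
                  "Arn": "arn:aws:lambda:us-east-1:123456789012:function:my-function"}],
    )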


We have code that you can run on your AWS account to get more accurate information from servers.lol based on CloudWatch metrics. Latency and cold-start information would be totally interesting, I'll suggest it to our team!


Great points! We'd like to expand on this to go deeper, and these are great suggestions.


I’m not sure what the stated purpose is for the .lol tld but it is difficult to take a service seriously when it uses a domain like servers.lol


Wow, we're paying A LOT more than we need to.


Would Lambda work for On-Prem at all?


Yes, take a look at AWS Greengrass: https://aws.amazon.com/greengrass/

This lets you run the same Lambda execution environment on any device you own. Typically, we see this being used by customers that need to do computing at the edge where Internet connectivity is poor or intermittent (think oil and gas fields, mines, etc.)

Disclaimer: I work for AWS


No, but take a look at OpenFaaS or FaaS-netes, similar concept but running on-premises. Definitely less mature, but they can get you 90% of the way there if you're not doing anything super complicated.


https://github.com/openfaas/faas (Serverless for on-prem, roll your own infra, etc)


What's the benefit or use case of running serverless on-premise? Isn't the whole idea of serverless that you don't maintain servers yourself? Instead of a specific Lambda/FaaS solution for on-prem, couldn't we just use Docker to package the function in a startup script and achieve the same?


I don't believe it would; AWS Lambda is all hosted at the moment. We moved some on-prem systems into EC2 and finally to Lambda functions.


We were in the same boat! Gonna do some testing of moving more EC2 apps into Lambda.


Fun site! It's a bit annoying that you can't edit the input and get a new result though.


Thanks for the feedback! You should be able to edit the inputs by clicking on the left-hand side (the application name).


Nice to see Haskell on the list.


It's in the list but when you click on it, it says "Some of the languages you are using are not yet officially supported in Lambda, but there are still runtime workarounds. Here are some resources:" and links to a blog post about running Haskell on Lambda.



