Ask HN: What are the alternatives to Amazon Lambda?
83 points by gamesbrainiac on Oct 30, 2015 | 74 comments



In a way, Amazon Lambda is just "upload a PHP script to my $3 shared webhost", except:

   - JavaScript/Java/Python instead of PHP
   - Can handle gigantic bucketloads of requests per second, transparently

If you don't expect to scale super high, and if the actual language isn't sacred to you, then you can get the exact same ease of deployment and a comparably large ecosystem of open source on your favorite el cheapo webhost.

Many of the reasons Lambda is attractive are the same reasons PHP became so popular (despite the not-so-awesome language).


This is where PHP can really shine.

I mean, you simply upload a script to a web server and have a new API endpoint...


You realize that Python, Perl (CGI), and even C can do the same things? mod_python, mod_perl, and the like.


In practice, you can't: stuff like mod_python was never popular and never really gained a foothold in the shared-hosting market.

Deploying a Python script today means either setting up a VPS or hunting for a niche provider like WebFaction (which will still need you to drop to a shell in most cases anyway). It's another level of skill required.


No, never used them.

I read that PHP was created to be a better Perl for web applications, so I guessed that Perl worked the same way.

But I had the impression that languages like Python and C were used to directly develop application servers.


One thing that strikes me as having really promising possibilities for doing something Lambda-esque is systemd socket activation [1]. You can essentially have no running services, and systemd will hold open a socket (port/unix/etc) and then it will spin up a given service if a request is made on that socket.

As long as you build your "service" to handle a single request and shut down (maybe have an external db-access service to avoid add'l connection overhead), I'd imagine you get something very similar to what Lambda offers - high density, on-demand computing.

[1] http://0pointer.de/blog/projects/socket-activated-containers... and http://0pointer.de/blog/projects/socket-activation.html
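A minimal sketch of that one-request-then-exit model in Python, assuming the standard systemd convention that inherited sockets start at fd 3 (the LISTEN_FDS/LISTEN_PID protocol from Poettering's posts); the service answers one connection and exits, leaving systemd to respawn it on the next request:

```python
import os
import socket

# systemd passes inherited sockets starting at fd 3 (after stdin/stdout/stderr)
SD_LISTEN_FDS_START = 3

def inherited_socket():
    """Return the listening socket systemd opened for us, or None."""
    if os.environ.get("LISTEN_PID") != str(os.getpid()):
        return None  # not started via socket activation
    if int(os.environ.get("LISTEN_FDS", "0")) < 1:
        return None
    return socket.socket(fileno=SD_LISTEN_FDS_START)  # Python 3.4+

def main():
    sock = inherited_socket()
    if sock is None:
        return
    conn, _ = sock.accept()   # handle exactly one request...
    conn.sendall(b"hello\n")
    conn.close()              # ...then exit; systemd respawns us next time

if __name__ == "__main__":
    main()
```

The `.socket` and `.service` unit files that wire this up are as described in the linked posts.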


As others have said, this is basically what inetd and the like have been doing for a long time. The reason it isn't used for heavy services anymore is that every request spawns a new process, and that's extremely expensive; it's like CGI at the socket level. The natural "solution" is to let your process keep running and merely redirect each new network request to it, possibly in another format; that's FastCGI for you. But then you lose the simplicity of programming your process as if it were invoked for every request.

The real innovation that Lambda brings to the table is container reuse [1]. Basically, your process is started in a container the very first time a request hits Lambda, and the initialization step happens; when it's done, the container is "snapshotted" and runs the request. Once the request has been completely processed, the process would normally exit, but instead the container is kept hot for subsequent requests. If they arrive soon enough, the container is reused, resuming right after the snapshot, as if it were running for the first time.

Because of this trickery, handling many requests is no longer so expensive, and latency can be very low. Bring that to everyone, and inetd/xinetd/systemd socket activation could seriously be considered a viable solution.

[1] https://aws.amazon.com/blogs/compute/container-reuse-in-lamb...
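The practical effect for your code is that anything at module scope behaves like once-per-container init, reused across warm invocations. A hypothetical handler illustrating the reuse described in [1] (the returned field names are made up for illustration):

```python
import time

# Module scope runs once per container (the "cold start") and the
# resulting state is reused by every warm invocation afterwards.
BOOTED_AT = time.time()
INVOCATIONS = 0

def handler(event, context):
    global INVOCATIONS
    INVOCATIONS += 1
    return {
        "invocation": INVOCATIONS,            # > 1 means a reused container
        "container_age_s": time.time() - BOOTED_AT,
    }
```

Expensive setup (DB connections, loading models, parsing config) therefore belongs at module scope, outside the handler.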


Seems like you could essentially implement an LRU container/process/etc. keepalive on top of <insert socket activation tech> and have your machines essentially fully-utilized with minimal latency for services being accessed recently.
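A rough sketch of that keepalive policy, with a hypothetical `spawn` callback standing in for whatever the socket-activation layer does to start a container or process:

```python
from collections import OrderedDict

class LruKeepAlive:
    """Keep at most `capacity` services hot; evict the least recently used.

    `spawn` is a placeholder for the (assumed) start-a-container hook,
    not a real systemd API.
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.hot = OrderedDict()  # service name -> running instance

    def request(self, name, spawn):
        if name in self.hot:
            self.hot.move_to_end(name)        # recently used stays warm
        else:
            if len(self.hot) >= self.capacity:
                self.hot.popitem(last=False)  # shut down the coldest service
            self.hot[name] = spawn()
        return self.hot[name]
```

Recently hit services stay resident with zero spin-up latency, while idle ones are reclaimed to keep the machine fully utilized.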


No, what Lambda does is take your equivalent of a CGI script, init it once, and reuse the snapshot afterwards. It is completely oblivious to the process. As far as it is concerned, your script is just "take this from STDIN, do the magic, write to STDOUT and exit". There is no notion of keepalive because the script itself isn't supposed to stay alive. All the magic happens in the execution of the script.



If you go to the second link I provided, Poettering explicitly mentions inetd. He also elaborates on what makes socket activation different.


Welcome to inetd/xinetd. They've been around for a long time :)


Along these lines I started a pet project [1] to enable Java artifacts uploaded to AWS Lambda to also be self-hosted. One of the last tasks I have left is to dynamically run a container based on the incoming request, similarly to what's outlined here [2].

[1] https://github.com/digitalsanctum/lambda [2] https://developer.atlassian.com/blog/2015/03/docker-systemd-...


Your GitHub link is a 404.


I realized I hadn't yet made the repo public. It is now :)


Check for keys!!! :-)


Iron Workers: http://www.iron.io/worker/

Most new breed Docker hosts like http://tutum.co can give you a quick DIY version.

But in general they (Amazon) are (currently) the best of breed and I have only positive things to say having used the JS and Java versions.


We're using IronWorker and Lambda for a current project. Our (Go) code can run on either and we've designed the app to easily swap out the backend in case of service failure.

Pros of IronWorker: More language support, better logging, easier to debug, scheduled tasks via API (this is the main thing stopping us going all in on Lambda).

Cons: Slower start up time (~2-3s on average compared to milliseconds with Lambda), pay upfront rather than pay for what you use, no choice re. CPU/RAM.


Thanks for the synopsis. Lots of plans for IronWorker, including fully custom Docker image support (soon) and millisecond response times (soon'ish). We also support custom configurations of CPU/RAM for dedicated customers, which is what most of our customers with stricter SLAs utilize.

That said, a big differentiator is IronWorker's ability to be deployed to any cloud provider as well as private clouds / on-premises. Many of our enterprise customers need hybrid deployments.

We're always listening so please feel free to shoot us feedback at support[at]iron.io or find us at @getiron.

Chad @ Iron.io


You may be interested to know that Lambda now supports scheduling: https://aws.amazon.com/blogs/aws/aws-lambda-update-python-vp...

Edit: Oh, are you saying that you want to send an API call to schedule something? They say in the post that API support is coming, but I don't know if you could schedule a one-off run that way.


Yeah, we connect with other services so we need the ability to programmatically re-schedule a function to run in ~5 minutes if it failed the first time.


Have you noticed any downsides to using Lambda over running a persistent Node server? Is there overhead in execution time, for example?


There's a warmup time when a function is first executed in a container that can be up to 2 seconds depending on what libraries you include (you pay for this time [1]). After the first execution, your function environment will stay "warm" for 10-15 minutes and it will only take 10-150ms to execute your function on a new event.

The other big downside is that every time your function goes "cold", you pay the warmup cost of loading all the extra libraries (anything not in the standard lib, ImageMagick, or the AWS Node.js SDK) that you need.

The upside is that you only pay for the time your code actually runs. I've replaced a cronjob server and saved ~90% on the bill to run my jobs. Mostly they were scheduled backups and other misc integrity checks on AWS and 3rd-party infra, so nothing that intense.

More limits/downsides: http://serverlesscode.com/post/aws-lambda-limitations/

1: the cost for 2 seconds of execution time on a 512-MB execution environment is 0.00001668 USD, so it's not likely to break the bank. If you have a high-traffic function it's likely to stay "warm" pretty much all the time. And if your function is low-traffic, it's likely you fit inside the free tier.
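That figure follows from the published pricing at the time (assumed here: $0.00001667 per GB-second, billed in 100 ms increments rounded up):

```python
import math

GB_SECOND_USD = 0.00001667  # Lambda compute price circa 2015

def invocation_cost(duration_s, memory_mb):
    """Cost of one invocation: billed duration (100 ms granularity,
    rounded up) scaled by the configured memory size in GB."""
    billed_s = math.ceil(duration_s * 10) / 10.0
    return billed_s * (memory_mb / 1024.0) * GB_SECOND_USD
```

For 2 seconds at 512 MB this yields roughly the $0.00001668 quoted above (there is also a small per-request charge, omitted here).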


Ha! A cronjob to keep functions from going cold! If everyone starts doing that, it will wreck the cost structure of Lambda for Amazon, I wager. One wonders why they didn't consider the possibility of users doing just that.


I don't think the previous poster was talking about using cron jobs to prevent functions from going cold.


The Node version was OLD last I checked; no Harmony features.


We've been hacking on something in this space for the past few months with StackHut - https://www.stackhut.com. We have a demo in a blog post at http://blog.stackhut.com/phantomjs-cloud/ showing headless web rendering-as-a-service.

We've tried to make it as simple as possible to get your code into the cloud. You write your classes and they're deployed as entrypoints into a lambda-esque service that you can call over JSON-RPC - making it trivial to write and deploy a backend.

As described in the blog, the great thing we think it has over tools like lambda is that you can describe your system declaratively, specifying any dependencies you may need, using a simple YAML file and it takes care of packaging your code. You can run this locally or deploy it to our hosted platform where it scales as needed.

We currently support JS and Python, and we're open-source at https://www.github.com/StackHut. We're super early but would love to hear your thoughts.


I've been using StackHut loads recently. Though it's more of a replacement for using AWS Lambda and AWS Gateway together.

There's a CLI tool and you can deploy literally any function you've declared locally, which StackHut then automagically makes available through HTTP; no web server routing or request handling required (on the server side, at least).

It's a super fun idea and it's surprisingly easy to get started with. If you've got some free time this weekend, totally check it out.


What about Azure Service Fabric [1]? It seems a bit different in that it doesn't even require you to have S3*/database storage. You have access to in-memory, distributed, resilient data structures [2] right in your code.

They seem to provide Chaos and Failover tests (à la Netflix's Chaos Monkey, but out of the box).

[1] https://azure.microsoft.com/en-us/campaigns/service-fabric/

[2] https://azure.microsoft.com/en-us/documentation/articles/ser...

*or Azure S3's equivalent


Google App Engine includes pretty much the same thing, except it's more mature since it's been around for ages. (JS isn't among its supported languages though — Go, Java, Python, PHP — but IIRC they may have recently added some kind of language-agnostic runtime.)


To me AWS Lambda is like a Brazil-like nightmare in bureaucracy (http://www.imdb.com/title/tt0088846/), while App Engine is equally powerful but a lot simpler to use.

It's harder to make something simple and powerful than it is to make something complex and powerful, so hats off to the Google engineers involved.


It took Google a long time to get there - I remember checking out AppEngine in 2007 and wondering why anyone would ever use it instead of EC2, and then checking it out in 2009 and wondering how the hell to put together a working app, and then checking it out in 2010 and giving up in frustration, and then finally checking it out in 2012 and finding it was actually pretty useful for prototypes.


The biggest disadvantage with Google from a global perspective: they don't have a Chinese-hosted edition, unlike Amazon/Microsoft/IBM.

I understand and can identify with the why...but still, most global businesses want to target China.


http://hook.io/ lets you link a gist script to a webhook, very similar. Open source & free.


Hey! Awesome! Just saw a nice bump in signups from this post, thank you!

I'm the sole creator of hook.io. I actually beat Amazon Lambda to market by over a month before they announced... so technically Lambda is a clone of my service! :-)

We are making a bit of revenue now and users really like the service.

We could really use an angel investment to push development forward. I've funded development for the entire project for the past 13 months out of pocket. We have no debt and almost no overhead.

Someone should invest in hook.io!


Oh man, this is exactly what I've been looking for for a long time. What a brilliantly simple implementation!

FYI: it looks like they have a really generous referral program[1], so post your referral link (shameless example: https://hook.io/christiangenco?s) and I'll tweet the developer to see if you can get retroactive credit.

1. https://hook.io/referrals


https://webscript.io is also pretty awesome, if you're willing to learn some Lua.


hook.io supports lua, along with eleven other programming languages.


It's continuously making requests to /totalHits/ (check the network tab); is that intentional?


Yes, that is the update counter for the live deployment count on the homepage. It constantly polls our total deployment count and updates the value on the homepage.

I'm a bit of a fan of putting unnecessary stress on our servers in a controlled fashion. This gives me a bit of wiggle room if we need to free up resources in an emergency...which still hasn't happened yet. :-)


https://webtask.io/

Why they don't use Lambda: https://medium.com/aws-activate-startup-blog/sandboxing-code...

...While we were at it, AWS announced AWS Lambda, which looked like a potential remedy to our challenge. Upon further investigation, however, it became clear AWS Lambda is centered on an asynchronous programming model that did not quite meet our needs. The custom logic we run in Auth0 is an RPC logic: it accepts inputs and must return outputs at low latency. Given the asynchronous model of AWS Lambda, implementing such logic would require polling for the results of the computation, which would adversely affect latency...


(Full disclosure: I work with Domino.)

For some use cases, Domino's API endpoints[0] feature is an alternative, especially if you want to run R or Python scripts.

[0] https://www.dominodatalab.com/benefits/web_services


Isn't most of the draw of Amazon Lambda that you can hook everything else into a simple api hosting service? I doubt there are any direct competitors, at least at that scale.

Here is a discussion:

https://www.quora.com/Are-there-any-alternatives-to-Amazon-L...


+1 one of the draws for Lambda is that it has easy integration with other AWS tools (Kinesis, DynamoDB, SNS, even CloudFormation) and is really low-cost since you only pay for execution time.


And Amazon API Gateway, potentially the request routing layer for a heterogeneous set of Lambda-backed endpoints. That may be the service jenkstom was referring to in particular. https://aws.amazon.com/api-gateway/


https://TaskMill.io is in early open preview; free for all

> TaskMill's extreme microservices OSS platform. Check-in your scripts to GitHub and we turn each into an HTTP endpoint. See how we are building a Docker-powered Open Cloud where we build on top of each other's scripts to Automate All The Things!

disclaimer; I am the founder of TaskMill


HN Hug of death? The website is down for me. I look forward to playing around with it.


Seems to be working fine


was increasing capacity; enjoy and feedback welcome


I started trying to use Lambda and found a few immediate issues.

1) It does not work with anything inside a VPC, e.g. RDS databases cannot be used. This severely limits why you'd use it on AWS for larger apps.

2) Trying to get it working for a real app is a bit of a gun show. Debugging was a pain, you can't really write unit tests easily (that I found), and local development was confusing.

3) Lambda forces you to use old technology: Node 0.10.x, Python 2.x.

4) You're using code that will only ever work on AWS. If you don't like vendor lock-in then look elsewhere.

If you don't need VPC access take a look at Heroku and background workers. They scale independently from the front-end main app. No servers to manage, just git push heroku master. You can write them in pretty much any language you want. Heroku supports the latest stacks and languages.


VPC support was introduced some weeks ago https://aws.amazon.com/it/blogs/aws/aws-lambda-update-python....


Not quite, they announced it would exist: "This feature will be available later this year." (from the post).

But when it is available, it'll be a huge step ahead.


Excellent thanks, I'm happy they have included this now.


RDS could be used, but not without weakening the security.


We have been working on something like this as well, using Docker containers. We call it Bakula; it's still in an early alpha phase.

The idea is that instead of writing to a particular API it listens for events and mounts a filesystem. Then you just process files in the filesystem as your events and delete them when they are done.

The code can be found on github at: https://github.com/immuta/bakula


Webscript (https://www.webscript.io/) but it's Lua only.


I tried that once. It's good if you're looking to invoke HTTP web services or SMTP mail services from the web, i.e. as online services not stored on your system.


One cool use that we are using Lambda for is to destroy EBS snapshots past a certain date. We tag snapshots with a 'retention_days' tag and then a Lambda process kicks off daily and deletes the old snapshots. The great thing about this is we can lock down the DeleteSnapshot api call so that none of our instances have access to that call anymore.
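The core of that cleanup job reduces to a pure expiry check, sketched here against snapshot dicts shaped like boto's DescribeSnapshots output; the function name and the `retention_days` tag scheme follow the description above, everything else is illustrative:

```python
import datetime

def is_expired(snapshot, now):
    """True if the snapshot is older than its `retention_days` tag allows."""
    tags = {t["Key"]: t["Value"] for t in snapshot.get("Tags", [])}
    retention = tags.get("retention_days")
    if retention is None:
        return False  # untagged snapshots are never auto-deleted
    return (now - snapshot["StartTime"]).days >= int(retention)
```

The daily Lambda run would list snapshots, filter with a check like this, and call DeleteSnapshot on the expired ones under an IAM role that instances themselves no longer have.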


Depending on your use case and how much startup time you can tolerate...

We need the ability to run 1,000s to 100,000s of ~3 second jobs as fast as possible, but only sporadically (~100 times a day with our current customer base). At that "low" frequency, we can't justify the cost of keeping that many dedicated servers up 24/7 to meet this intermittent need, and we need sub-second startup times so we can't launch normal VMs on demand.

However, the cost of servers on Google Compute Engine (GCE) is actually cheaper than lambda if you have big batches of jobs. The VMs could be launched via their autoscaler or Dataflow, and jobs could be dispatched via Pub/Sub. If you can secure enough preemptible GCE instances (which are of limited availability), these are even cheaper. GCE VMs launch in about a minute, compared to AWS's 3-7 minutes, so on-demand launching is actually doable for some use cases as long as you don't mind paying the 10 minute minimum. This also gives you access to huge resources (CPU and RAM) if needed.

Specifically, ~1.1 million 3300 ms, 1536 MB Lambda jobs cost us $90.04, and GCE would be $50 on regular or $15 on preemptible GCE VMs for the same number of jobs in the /same total execution time/. After requesting a Lambda quota of 100,000 concurrent jobs, we were only allowed 1,000 ("for now", they say) -- thus it takes an hour for that many jobs to complete, so GCE$ = (1 hr) * (cost per vCPU/hr) * (1000 vCPUs). Note that we're using the max allowable memory on Lambda because that ~linearly decreased our execution time, but we use an average of 400 MB. My estimate also doesn't take into account the overhead time that comes from launching Lambda jobs, which might be reduced via the home-brew solution I described above.
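The arithmetic above is easy to sanity-check (2015 pricing assumed: $0.00001667 per GB-second plus $0.20 per million requests):

```python
jobs = 1_100_000
concurrency = 1_000   # the quota Amazon actually granted
duration_s = 3.3      # ~3300 ms per job
mem_gb = 1536 / 1024  # max memory, chosen to cut execution time

# Wall clock: 1100 waves of 1000 jobs at ~3.3 s each -> ~3630 s, about an hour
wall_clock_s = jobs / concurrency * duration_s

# Compute charge plus the per-million-requests charge
lambda_usd = jobs * duration_s * mem_gb * 0.00001667 + (jobs / 1e6) * 0.20
# ~91 USD, in the same ballpark as the $90.04 actually billed
# (real per-job durations vary, hence the small gap)
```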

Someone else mentioned the issue with Lambda only having vintage Node 0.10. We have a somewhat complicated albeit elegant solution for communicating with a portable Node 4.x binary via IPC. If AWS ever allows specifying the Node version, that would reduce our runtime marginally.

(BTW, I put in a feature request to GCE for a lambda-like service here: https://code.google.com/p/google-compute-engine/issues/detai...)


We used to use Joyent Manta for the same purpose we are now using Lambda. Joyent worked well, but Lambda works significantly better (speed-wise) and significantly cheaper.


Newbie question: what does "Amazon Lambda" have to do with functional programming? (Because the name suggests there is a connection).


"With Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. "

I.e. wrap your code as if it were a lambda function, and pass it to us. It will work when we call it just as well as when you do in the original environment.

Sounds like a better name would be "Amazon Macro", though. This is because the "lambda" (user's application) likely refers to features of the hosting environment and is influenced by them; it is not referentially transparent to place code on a server, even if the grunt work of setting up the surroundings is done by a "macro".

"Lambda" is a hot buzzword though, as some notable Blub languages have been scrambling to integrate them, whereas "macro" has a stained reputation.


There are several connections between Lambda and functional programming.

1. The snippets of code you run in Lambda are called "cloud functions".

2. Lambda applications are generally event driven and stateless, which is a very functional idiom.

As other folks mentioned, however, you can use Lambda with non functional languages (like Java). So even though a Lambda application is inherently functional (small stateless functions communicating via events), you don't have to write in a functional style if you don't want to.


Actually, not much. It's called Lambda because you can deploy functions to the cloud, but those functions can be written in a non-functional style.


That's the case for most uses of "lambda" in programming languages too. It just means something like "anonymous function". I believe Lisp was the first language to introduce lambda as a keyword, and that's how it uses it: (lambda (x) ...) creates an anonymous function, but the body of the function doesn't have to be written in a functional style. In fact the modern concept of "functional style" didn't exist at the time.


Tjo-fa-de-rittan lambo?


Inspired by Amazon Lambda I started working a few days ago in a similar project: Nervio [0]

It is currently a very early alpha. Today I plan to hide all the NATS references so components get a very simple, agnostic API.

[0] https://github.com/nervio/nervio/



Thanks for mentioning! Working on getting an OS X packaged app out this weekend along with that Linux/Bash-as-a-service up!


APITools might help (depending on your use case). Haven't used them myself but they gave a pretty good talk a couple months back.

https://www.apitools.com


Anyone here have experience building workflows with SWF on top of lambda?


Azure WebJobs


Why do you want an alternative? Not sure there are any as far as price goes.


If you are looking to host a "PHP script" for free, you should consider 000webhost, or any shared host, including the one you are using now.



