- Can handle gigantic bucketloads of requests per second, transparently
Many of the reasons Lambda is attractive are the same reasons PHP became so popular (despite the not-so-awesome language).
I mean, you simply upload a script to a web server and have a new API endpoint...
Deploying a Python script today means either setting up a VPS or hunting for a niche provider like WebFaction (which will still require you to drop to the shell in most cases anyway). It's another level of skill required.
I read that PHP was created to be a better Perl for web applications, so I assumed Perl worked the same way.
But I had the impression that languages like Python and C were used to directly develop application servers.
As long as you build your "service" to handle a single request and shut down (maybe with an external DB-access service to avoid additional connection overhead), I'd imagine you get something very similar to what Lambda offers: high-density, on-demand computing.
 http://0pointer.de/blog/projects/socket-activated-containers... and http://0pointer.de/blog/projects/socket-activation.html
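To make that concrete, here's a minimal sketch of the inetd-style model those posts describe (the one-line echo "protocol" here is made up for illustration): the superserver, or systemd with Accept=yes, accepts the connection and hands it to the process as stdin/stdout, the process serves exactly one request, and exits.

    #!/usr/bin/env python3
    # One-request-per-process service: the superserver passes the accepted
    # connection as stdin/stdout; we answer one request and exit.
    import json
    import sys

    request = sys.stdin.readline()         # read a single request line
    response = {"echo": request.strip()}   # the real work goes here
    sys.stdout.write(json.dumps(response) + "\n")
    sys.stdout.flush()
    # exiting closes the connection; the next one spawns a fresh process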
The real innovation Lambda brings to the table is container reuse. Your process is started in a container the very first time a request hits Lambda, and the initialization step runs; when that's done, the container is "snapshotted" and the request is handled. Once the request has been fully processed the process would normally exit, but instead the container is kept hot for subsequent requests. If they arrive soon enough, the container is reused right from the snapshot, as if it were running for the first time.
Because of this trickery, handling many requests is no longer so expensive, and latency can be very low. Bring that to everyone, and inetd/xinetd/systemd socket activation could seriously be considered a viable solution.
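You can actually observe the reuse from inside a function: anything initialized at module scope survives warm invocations. A tiny sketch (Python-style handler; the names are arbitrary):

    import time

    # Module-scope code runs once per container, during the initialization
    # step described above (the "cold start").
    BOOTED_AT = time.time()
    # open DB connections, load config, etc. here, once per container

    def handler(event, context):
        # On warm invocations the module state is still there, so a
        # growing age shows the same container was reused.
        return {"container_age_seconds": time.time() - BOOTED_AT}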
Most new-breed Docker hosts, like http://tutum.co, can give you a quick DIY version.
But in general they (Amazon) are (currently) the best of breed, and I have only positive things to say having used the JS and Java versions.
Pros of IronWorker: more language support, better logging, easier debugging, and scheduled tasks via API (that last one is the main thing stopping us from going all in on Lambda).
Cons: slower start-up time (~2-3 s on average, compared to milliseconds with Lambda), paying upfront rather than paying for what you use, and no choice regarding CPU/RAM.
That said, a big differentiator is IronWorker's ability to be deployed to any cloud provider as well as private clouds / on-premises. Many of our enterprise customers need hybrid deployments.
We're always listening so please feel free to shoot us feedback at support[at]iron.io or find us at @getiron.
Chad @ Iron.io
Edit: Oh, are you saying that you want to send an API call to schedule something? They say in the post that API support is coming, but I don't know if you could schedule a one-off run that way.
The other big downside is that every time your function goes "cold", you pay the start-up cost of loading all the extra libraries you need (anything beyond the standard library, ImageMagick, or the AWS Node.js SDK).
The upside is that you only pay for the time your code actually runs. I've replaced a cron-job server and saved ~90% on the bill to run my jobs. Mostly they were scheduled backups and other miscellaneous integrity checks on AWS and third-party infra, so nothing that intense.
More limits/downsides: http://serverlesscode.com/post/aws-lambda-limitations/
1: the cost for 2 seconds of execution time on a 512-MB execution environment is 0.00001668 USD, so it's not likely to break the bank. If you have a high-traffic function it's likely to stay "warm" pretty much all the time. And if your function is low-traffic, it's likely you fit inside the free tier.
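The arithmetic behind that footnote, assuming Lambda's published ~$0.00001667 per GB-second compute rate:

    rate = 0.00001667                  # USD per GB-second
    cost = 2 * (512 / 1024.0) * rate   # 2 s at 512 MB = 1 GB-second
    print(cost)                        # ~0.0000167 USD per invocation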
We've tried to make it as simple as possible to get your code into the cloud. You write your classes and they're deployed as entrypoints into a lambda-esque service that you can call over JSON-RPC - making it trivial to write and deploy a backend.
As described in the blog post, the big advantage we think it has over tools like Lambda is that you can describe your system declaratively, specifying any dependencies you may need in a simple YAML file, and it takes care of packaging your code. You can run this locally or deploy it to our hosted platform, where it scales as needed.
We currently support JS and Python, and we're open source at https://www.github.com/StackHut. We're super early but would love to hear your thoughts.
There's a CLI tool, and you can deploy literally any function you've declared locally; StackHut then automagically makes it available over HTTP, with no web-server routing or request handling required (on the server side, at least).
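For the curious, a call to a function deployed that way might look something like this; the endpoint URL and method name are hypothetical, just to show the JSON-RPC shape:

    import json
    import urllib.request

    payload = {"jsonrpc": "2.0", "method": "Demo.add", "params": [1, 2], "id": 1}
    req = urllib.request.Request(
        "https://api.stackhut.com/run",        # hypothetical endpoint
        json.dumps(payload).encode(),
        {"Content-Type": "application/json"},
    )
    print(json.load(urllib.request.urlopen(req)))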
It's a super fun idea and it's surprisingly easy to get started with. If you've got some free time this weekend, totally check it out.
They seem to provide chaos and failover tests out of the box (à la Netflix's Chaos Monkey).
*or Azure's equivalent of S3
It's harder to make something simple and powerful than it is to make something complex and powerful, so hats off to the Google engineers involved.
I understand and can identify with the why...but still, most global businesses want to target China.
I'm the sole creator of hook.io. I actually beat Amazon Lambda to market by over a month before they announced... so technically Lambda is a clone of my service! :-)
We are making a bit of revenue now and users really like the service.
We could really use an angel investment to push development forward. I've funded development for the entire project for the past 13 months out of pocket. We have no debt and almost no overhead.
Someone should invest in hook.io!
FYI: it looks like they have a really generous referral program, so post your referral link (shameless example: https://hook.io/christiangenco?s) and I'll tweet the developer to see if you can get retroactive credit.
I'm a bit of a fan of putting unnecessary stress on our servers in a controlled fashion. This gives me a bit of wiggle room if we need to free up resources in an emergency...which still hasn't happened yet. :-)
Why they don't use Lambda: https://medium.com/aws-activate-startup-blog/sandboxing-code...
...While we were at it, AWS announced AWS Lambda, which looked like a potential remedy to our challenge. Upon further investigation, however, it became clear AWS Lambda is centered on an asynchronous programming model that did not quite meet our needs. The custom logic we run in Auth0 is an RPC logic: it accepts inputs and must return outputs at low latency. Given the asynchronous model of AWS Lambda, implementing such logic would require polling for the results of the computation, which would adversely affect latency...
For some use cases, Domino's API endpoints feature is an alternative, especially if you want to run R or Python scripts.
Here is a discussion:
> TaskMill's extreme microservices OSS platform. Check-in your scripts to GitHub and we turn each into an HTTP endpoint. See how we are building a Docker-powered Open Cloud where we build on top of each other's scripts to Automate All The Things!
disclaimer: I am the founder of TaskMill
1) It does not work with anything inside a VPC, e.g. RDS databases cannot be used. This severely limits why you'd use it on AWS for larger apps.
2) Trying to get it working for a real app is a bit of a gun show. Debugging was a pain, I couldn't find a way to easily write unit tests, and local development was confusing.
3) Lambda forces you to use old technology: Node 0.10.x, Python 2.x.
4) You're using code that will only ever work on AWS. If you don't like vendor lock-in then look elsewhere.
If you don't need VPC access, take a look at Heroku and background workers. They scale independently from the front-end main app. No servers to manage, just git push heroku master. You can write them in pretty much any language you want, and Heroku supports the latest stacks and languages.
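A minimal version of that setup, assuming a Redis-backed queue (the queue name and payload shape are made up here):

    # worker.py -- declared in the Procfile as:  worker: python worker.py
    import json
    import os
    import time

    import redis  # redis-py, e.g. via the Heroku Redis add-on

    r = redis.from_url(os.environ["REDIS_URL"])

    while True:
        job = r.lpop("jobs")       # grab the next queued job, if any
        if job is None:
            time.sleep(1)          # queue empty; poll again shortly
            continue
        payload = json.loads(job)
        # ... do the actual work here ...

And you scale it independently of the web dyno with heroku ps:scale worker=2.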
But when it is available, it'll be a huge step ahead.
The idea is that instead of writing to a particular API, it listens for events and mounts a filesystem. Then you just process files in the filesystem as your events and delete them when they're done.
The code can be found on GitHub at https://github.com/immuta/bakula
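I haven't looked at the internals, but the model described sounds roughly like this (mount point and event format are made up; see the repo for the real thing):

    import json
    import os
    import time

    SPOOL = "/mnt/events"   # hypothetical event mount point

    while True:
        for name in os.listdir(SPOOL):
            path = os.path.join(SPOOL, name)
            with open(path) as f:
                event = json.load(f)
            # ... handle the event here ...
            os.remove(path)  # deleting the file marks the event as done
        time.sleep(0.5)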
We need the ability to run 1,000s to 100,000s of ~3-second jobs as fast as possible, but only sporadically (~100 times a day with our current customer base). At that "low" frequency we can't justify the cost of keeping that many dedicated servers up 24/7 to meet an intermittent need, and since we need sub-second start-up times we can't launch normal VMs on demand.
However, the cost of servers on Google Compute Engine (GCE) is actually cheaper than lambda if you have big batches of jobs. The VMs could be launched via their autoscaler or Dataflow, and jobs could be dispatched via Pub/Sub. If you can secure enough preemptible GCE instances (which are of limited availability), these are even cheaper. GCE VMs launch in about a minute, compared to AWS's 3-7 minutes, so on-demand launching is actually doable for some use cases as long as you don't mind paying the 10 minute minimum. This also gives you access to huge resources (CPU and RAM) if needed.
Specifically, ~1.1 million 3300 ms, 1536 MB Lambda jobs cost us $90.04, while GCE would be $50 on regular or $15 on preemptible GCE VMs for the same number of jobs in the *same total execution time*. After requesting a Lambda quota of 100,000 concurrent jobs, we were only allowed 1,000 ("for now", they say); thus it takes an hour for that many jobs to complete, so GCE$ = (1 hr) * (cost per vCPU/hr) * (1000 vCPUs). Note that we're using the max allowable memory on Lambda because that ~linearly decreased our execution time, even though we only use an average of 400 MB. My estimate also doesn't account for the overhead of launching Lambda jobs, which might be reduced via the home-brew solution I described above.
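For anyone checking the math, a back-of-the-envelope version (rates are assumptions based on then-published pricing):

    jobs = 1.1e6
    lambda_gb_s = jobs * 3.3 * 1.5          # 3300 ms at 1536 MB each
    lambda_usd = lambda_gb_s * 0.00001667   # ~$91, close to the $90.04 billed

    # Capped at 1,000 concurrent jobs the batch takes ~1 hour, so the GCE
    # equivalent is ~1,000 single-vCPU VMs running for that hour:
    gce_usd = 1000 * 0.05                   # ~$50 at ~$0.05 per vCPU-hour
    gce_preemptible_usd = 1000 * 0.015      # ~$15 at preemptible rates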
Someone else mentioned the issue of Lambda only offering vintage Node 0.10. We have a somewhat complicated, albeit elegant, solution: communicating with a portable Node 4.x binary via IPC. If AWS ever allows specifying the Node version, that would reduce our runtime marginally.
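The general shape of that trick (their implementation is Node-side; this is just a Python sketch with hypothetical paths and a made-up line protocol): spawn the portable binary bundled in the deployment package and speak newline-delimited JSON over its stdin/stdout.

    import json
    import subprocess

    # Portable node 4.x binary shipped inside the deployment zip
    # (hypothetical path); worker.js reads/writes one JSON line per call.
    proc = subprocess.Popen(
        ["./bin/node", "worker.js"],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
    )

    def call_worker(payload):
        proc.stdin.write((json.dumps(payload) + "\n").encode())
        proc.stdin.flush()
        return json.loads(proc.stdout.readline())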
(BTW, I put in a feature request to GCE for a lambda-like service here: https://code.google.com/p/google-compute-engine/issues/detai...)
I.e. wrap your code as if it were a lambda function, and pass it to us. It will work when we call it just as well as when you do in the original environment.
Sounds like a better name would be "Amazon Macro", though. This is because the "lambda" (user's application) likely refers to features of the hosting environment and is influenced by them; it is not referentially transparent to place code on a server, even if the grunt work of setting up the surroundings is done by a "macro".
"Lambda" is a hot buzzword though, as some notable Blub languages have been scrambling to integrate them, whereas "macro" has a stained reputation.
1. The snippets of code you run in Lambda are called "cloud functions".
2. Lambda applications are generally event driven and stateless, which is a very functional idiom.
As other folks mentioned, however, you can use Lambda with non-functional languages (like Java). So even though a Lambda application is inherently functional (small stateless functions communicating via events), you don't have to write in a functional style if you don't want to.
It's currently a very early alpha. Today I plan to hide all NATS references to provide a very simple, agnostic API for components.