
Microservices without the Servers - alexbilbie
https://aws.amazon.com/blogs/compute/microservices-without-the-servers/
======
baconmania
This is Amazon's wet dream. Your app isn't an app at all, it's just a
collection of configs on the AWS Console. When and if the time comes to
migrate off of AWS, you realize you don't actually have an app to migrate.

~~~
arihant
Or, you realize that your processes are so minimalistic and well structured due to the lack of options that you only have to write a custom request router over the weekend to migrate.

Also, Lambda-like options are available with most PaaS providers now.

It is not much different, and might be easier, than migrating a web app from a
custom PaaS. The only issue is, and has always been, the migration of data.
And I don't see that getting solved until some startup writes a bunch of
layers on top of a bunch of providers. It's very tricky, for good reasons.

~~~
IanCal
> Also, Lambda-like options are available with most PaaS providers now.

Which other ones are there? I used to use PiCloud until they were bought out
by dropbox and it atrophied. Shame, it was exactly what I wanted in a service.

~~~
_Marak_
[http://hook.io](http://hook.io) is an open-source microservice platform.

We launched a month before Amazon Lambda, and have better features like full
support for streaming HTTP.

~~~
georgefrick
But you don't have future price information, so it's a bit hard to make an
actual enterprise recommendation. This will cost X in the future, but it's free
for now? Free for now is great for me as a tinkerer/developer, but I couldn't
recommend it to a client.

~~~
_Marak_
We do offer paid accounts, and in fact already have a nice-sized group of
paying customers.

We're still in the process of establishing our service tiers, but our basic
hosting plan starts at $5.00 per month.

[http://hook.io/pricing](http://hook.io/pricing)

------
paulspringett
Interesting that the article talks about load tests but omits any results.

I was trying out an API Gateway + Lambda + DynamoDB setup in the hope that it
would be a highly scalable data capture solution.

Sadly the marketing doesn't match the reality. The performance, in terms of
both reqs/sec and response time, was pretty poor.

At 20 reqs/sec - no errors and majority of response times around 300ms

At 45 reqs/sec - 40% of responses took more than 1200ms, min request time was
~350ms

At 50 reqs/sec - very slow response times and lots of SSL handshake timeout
errors. I think requests were being throttled by Lambda, but I would expect a
429 response as per the docs, rather than SSL errors.

My hope was that Lambda would spin up more function instances as demand
increased, but if you read the FAQs carefully it looks as though there are
default concurrency limits. You can ask for these to be raised, but that
doesn't make scaling very real-time.

~~~
balls187
Correct. Lambda isn't designed for high data throughput. That's what Amazon
Kinesis is for. Each Kinesis shard can handle 1000KB/s ingestion rates. You
would write your data to a Kinesis stream, then use Lambda to respond to the
Kinesis event and write the data to your DynamoDB table.
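As a rough sketch of that pipeline's Lambda side (the table name, record shape, and sample payload here are hypothetical, and the actual DynamoDB write is left as a comment since it needs live AWS credentials):

```python
import base64
import json

def handler(event, context):
    """Lambda entry point for a Kinesis trigger.

    Kinesis delivers each record base64-encoded under
    event["Records"][i]["kinesis"]["data"]; decode them all and
    collect the items to batch-write to DynamoDB.
    """
    items = []
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])
        items.append(json.loads(payload))
    # A real function would then do something like:
    #   table = boto3.resource("dynamodb").Table("capture")  # hypothetical table
    #   with table.batch_writer() as batch:
    #       for item in items:
    #           batch.put_item(Item=item)
    return items

# A sample event in the shape Lambda passes for a Kinesis trigger
sample_event = {"Records": [
    {"kinesis": {"data": base64.b64encode(
        json.dumps({"user": "alice", "ts": 1}).encode()).decode()}}
]}
```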

~~~
paulspringett
Thanks for the info on this, I hadn't seen Kinesis before. I also tried
something similar with S3 upload but Kinesis looks a much better solution for
what I'm trying to do.

------
raspasov
I see a lot of people disagreeing with the overall direction of "fewer
servers, more services". I totally get it, as I used to be one of those people,
but I think the shift to "less hassle development" is inevitable.

5 years ago, people used to debate whether we should use a virtualized server
vs. a physical one. You can still see similar discussions, but rarely; we have
all more or less agreed that using AWS/Rackspace/etc. is good for a business
in the majority of use cases.

I think 5 years from now we'll still be debating servers vs. services, but the
prevailing wisdom will be that "services" have won.

~~~
alexro
It may well be that companies will run their private clouds on colocated
servers. What wins in that case?

~~~
raspasov
Maybe for some companies/use cases; however, my feeling is that the setup time
and dealing with hardware directly will always be too much hassle for the
majority.

~~~
alexro
Except they will deal with hardware anyway; there are several thousand
different devices in our offices. How difficult is it to get another two
admins? And two could be enough in the modern world where everything is
automated.

------
xchaotic
It is pretty cool, but not really serverless: you are still handling HTTP
requests via Amazon API Gateway, and in general you are relying on, and paying
for, quite a lot of Amazon services. Not sure how much better this approach is
than serving ImageMagick via PHP, for example; it would be good to see some
numbers.

~~~
philsnow
This removes nearly all of "devops". You don't have to mess around with
figuring out how many EC2 instances you need (or deal with auto-scaling
groups), how to secure the Linux or whatever else you stick on the EC2
instances, etc.

There's still a ton of creating zip file artifacts of your lambda payloads
(instead of pushing to a magic git repository that amazon controls, say), so
there's a bit of "build monkey"ing to do instead of "devops"ery. But I think a
lot of shops will be happy to make that trade, as "build" is closer to their
core experience than "devops".

~~~
maratd
Yes, you get rid of devops.

You gain vendor lock-in. You are now tied to the Amazon platform. If they shut
down or suspend your account, for any reason, you are out of business. You are
also paying a premium for the platform, with the cost of devops built in.

I'll take an open ecosystem that gives me options to migrate my business
anytime over a proprietary solution.

~~~
curiousjorge
Pretty much this. Amazon can at any point shut down any of these services or
nerf them. If you built everything up to this point on Amazon and they shut it
down, it's a lot more work.

~~~
maratd
To add to your point, they have done this before and are still doing it. There
is no guarantee of continued service.

[http://recode.net/2015/03/18/amazon-will-shut-down-amazon-webstore-its-competitor-to-shopify-and-bigcommerce/](http://recode.net/2015/03/18/amazon-will-shut-down-amazon-webstore-its-competitor-to-shopify-and-bigcommerce/)

~~~
duskwuff
Amazon Webstore wasn't part of AWS, though. It was part of their commerce wing
- very different.

~~~
curiousjorge
Remember PiCloud? They shut down after I spent a quarter building around their
API calls. I never want to repeat that mistake. With an open stack you also get
the bonus of being able to sell your source code, or deploy a local cloud if an
enterprise is willing to pay extra for it.

------
manigandham
Are servers really that hard to manage these days? This seems like way more
work and pretty limited in what it can really do, especially compared to a few
lines of code in any decent web framework that can perform a lot faster.

~~~
herval
If you're a single developer/small team with a very small product, managing
servers is a chore that won't add any value to the product you're building.

So you either spend very little time on it and build servers ad hoc
("snowflake" style: SSH in, install some stuff, etc.), or you spend precious
time doing "the right thing", which right now means navigating a huge universe
of options (Chef/Puppet/Ansible, Docker/other containers/no containers, etc.).

If you're part of a larger team, not having a properly structured
infrastructure is a nightmare, especially when it comes to scaling or dealing
with failures of all kinds.

TL;DR: yes, I'd say it's somewhat hard...

~~~
joeyspn
> I'd say it's somewhat hard...

I don't think it's hard; it's just time-consuming. And we all know that "time
is money", especially for small teams or solo devs (as you pointed out).

~~~
herval
Doing it right is hard. Scaling infrastructure throughout multiple zones while
keeping data as consistent as possible, deployments as easy as possible and
having as few SPOFs as possible is pretty difficult (and done differently by
every single team). The range of things that can go wrong is huge...

~~~
joeyspn
Not every product needs multi-AZ from the start (especially for small teams or
solopreneurs); in most cases you'd be over-engineering. And in the cases where
you do need AZs and want to _do it right_, new tools like Convox* are really
easy to use and can save you a lot of work. It's never been this easy to
manage your own infrastructure.

Over the years I've used SSH, Puppet, Fabric, Ansible, Capistrano,
CloudFormation, etc. for managing servers and infrastructure. And I think the
main benefit of any PaaS, AWS Lambda, or AWS API Gateway is (obviously) that
they save time and abstract away the internals. In fact I use them in several
_small_ projects.

* [https://www.convox.com](https://www.convox.com)

------
seiji
"Microservices without the Servers: the Uberization of IaaS as PaaS for SaaS"

Like when you say you have no carbon footprint because you don't own a car,
even though you call a taxi every time you want to go somewhere?

Are microservices different from SOA? Or is it just a more modern, streamlined
buzzword?

You say "microservices," but all I see is "omg, you realize inter-node latency
isn't a trivial component to ignore when building interactive services,
right?"

~~~
hyperpallium
Yes, microservices are just a rebranding of a SOA subset.
[http://martinfowler.com/articles/microservices.html#Microser...](http://martinfowler.com/articles/microservices.html#MicroservicesAndSoa)
But I think this direction is inevitable, and we'll soon see freemium
microservices.

Amazon's "Lambda" page (esp scroll down to the "benefits"
[https://aws.amazon.com/lambda/](https://aws.amazon.com/lambda/) ) shows it's
more like offloading some tasks (like worker threads in the cloud).

I had a play with the second (linked) app, SquirrelBin
[http://squirrelbin.com/](http://squirrelbin.com/) which lets you edit and run
JavaScript snippets. The latency is awful: 2-3 seconds for me (I'm in
Australia, but that should only add 200ms round trip or so). They seem to spin
up (reuse?) an entire instance for _each request_; it's incredible that it's
as fast as it is.

But the problem is the architecture of this specific app: the delay would be
fine if you could run the edit-run loop locally, without the cloud. But they
wanted to demonstrate quick development (for them) by just making a CRUD app,
using AWS Lambda's existing HTTP endpoints for PUT, POST, GET, DELETE. So after
editing you have to save, load and run, and each step interacts with the
cloud. BTW, the article about SquirrelBin:
[https://aws.amazon.com/blogs/compute/the-squirrelbin-architecture-a-serverless-microservice-using-aws-lambda/](https://aws.amazon.com/blogs/compute/the-squirrelbin-architecture-a-serverless-microservice-using-aws-lambda/)

~~~
seiji
_They seem to spin up (reuse?) an entire instance for each request_

There are some clever platforms running on bare Xen (no general-purpose OS)
that can spin up an entire instance and destroy it on every request pretty
quickly. [http://erlangonxen.org](http://erlangonxen.org) is a great example:
100ms to boot your entire "system" for production usage.

------
jacques_chester
Here's how I deploy code, without having to modify it:

    cf push myapp

It figures out the language/runtime I'm using (Java, Ruby, Go, NodeJS, PHP),
builds the code with a buildpack, then hands it off to a cloud controller
which places it in a container. My code gets wired to traffic routing, log
collection and injected services. I can deploy a 600MB Java blockbuster using
8GB of RAM per instance, or I can push a 400KB Go app that needs 8MB of RAM
per instance.

I don't need to read special documentation, I don't need special Java
annotations.

I just push. _And it just works._

I'm talking about Cloud Foundry. It runs on AWS. And vSphere. And OpenStack.
It's opensource and doesn't tie you to a single vendor or cloud forever.

I worked on it for a while, in the buildpacks team, so I'm a one-eyed fan.

Seriously: why are we still talking about devops? _It's a solved problem_.
Use Heroku. Install Cloud Foundry. Install OpenShift. And get back to focusing
on user value, not tinkering.

Disclaimer: I work for Pivotal Labs, part of Pivotal, which donates the
largest amount of engineering effort on Cloud Foundry (followed by IBM).

~~~
MichaelGG
As a note, I decided to look up CF based on this comment. This led me to
cloudfoundry.org, which appears entirely devoid of content. Just useless talk
about "heavyweights" and so on. The menu didn't appear to have any links to
anything useful either. Clicking on Products led to a page with three product
names. Having visited the site, I'm actually now negatively disposed towards
it (but your comment outweighs my experience, and I'll still attempt to check
it out).

Granted I only spent a minute, but if this is a typical experience, I'm unsure
how anyone would come to the conclusion that there's any software worth using
there.

~~~
jacques_chester
Frankly, I agree with you. We suck at developer outreach. It bugs me.

Unless you know where to find the docs[0], they're not obvious. There's a
single master repo[1], but it's oriented at _deployment_ and works by
aggregating dozens of sub-projects[2] into a BOSH release and BOSH deployment.

... which requires you to know what the hell BOSH[3] is ...

So recently we started trying to make it easier. The best place to start
tinkering is Lattice[4], which is a cut-down extract of Cloud Foundry. Or
Pivotal Web Services[5]. Or IBM BlueMix, I guess[6].

[0] [http://docs.cloudfoundry.org/](http://docs.cloudfoundry.org/)

[1] [https://github.com/cloudfoundry/cf-release](https://github.com/cloudfoundry/cf-release)

[2] [https://github.com/cloudfoundry](https://github.com/cloudfoundry) and [https://github.com/cloudfoundry-incubator](https://github.com/cloudfoundry-incubator)

[3] [http://bosh.io/docs](http://bosh.io/docs)

[4] [http://lattice.cf/docs](http://lattice.cf/docs)

[5] [https://run.pivotal.io/](https://run.pivotal.io/)

[6] [https://console.ng.bluemix.net/](https://console.ng.bluemix.net/)

~~~
MichaelGG
Thanks for the links, much appreciated! How does CF compare to go.cd? Will
there be a lot of setup work required?

~~~
jacques_chester
go.cd fills a different role. Funnily enough go.cd was the main CI system used
for Cloud Foundry, though it's being steadily replaced by concourse.ci.

Cloud Foundry is a bear to install because you will probably wind up needing
to wrap your head around BOSH, the IaaS orchestration tool. Once you get past
that hump it's relatively obvious. Getting past the hump is tough.

Bear in mind that it's a _complete_ PaaS, the kind of thing you bet your
company on (and our customers do). BOSH is a heavyweight system that predates
a lot of later tools like Terraform or CloudFormation. On the other hand, we
use BOSH to update Pivotal Web Services to the latest cf-release every 2 weeks
or so, and basically nobody ever notices. It just works.

The easiest way to start is either Lattice or a public Cloud Foundry
installation. The former has the advantage of being easy to install on a
laptop, and it's intended for developers to tinker with. The latter has the
advantage that someone else ran `bosh deploy` and is provisioning the VMs that
Cloud Foundry runs on. Pivotal Web Services (based on AWS) and IBM BlueMix
(based on SoftLayer, I think) are the two main ones.

~~~
vacri
> _Getting past the hump is tough._

So... it's not a solved problem after all? :)

~~~
jacques_chester
Oh you :)

You only have to install CF once, not every time you deploy. After that it's
easy to upgrade. We do so on Pivotal Web Services every time cf-release is
incremented, which is approximately fortnightly.

------
daviding
I'm playing with these exact things now and it is very enjoyable so far.

My main worry is not on the technical side but on how things are charged for.
If I build something that starts to get used I am covered in terms of
scalability, but not in a way that protects me from 'cost scalability' so to
speak. I know I can set up billing alerts and hit a big 'shutdown' button in
response to high load, but what I don't think I can do is throttle these
services based on the money I want to budget/spend. With my own servers I have
a hard cost limit alongside a hard scalability limit; or rather, I just accept
that my response times will degrade, or requests will fail, once I've
allocated all I can afford.

Is there something in AWS for 'cost throttling'? It may be a gap in their
services, especially for people who want to build things that might get
traction.

~~~
bpicolo
As a small user, I've bemoaned the lack of 'cost throttling' for a while. I
spend minimally and don't want to worry about, e.g., a private key leak that
costs a fortune, or some malicious traffic hitting my S3 hard.

~~~
aluskuiuc
One of the easiest mitigations is to not even create credentials that have
access to do anything that could run up a bill in a short amount of time.
Between the Console (access protected with an MFA token) and IAM roles,
neither you nor your application ought ever to have to handle raw AWS secrets.

~~~
bpicolo
Yeah, I do use IAM roles heavily, 2fa, etc : )

------
pea
Great to see Lambda stepping up their serverless game. We're big fans of this
approach and are hacking on something similar to this at StackHut[1], but:

* Mostly OSS to avoid lock-in

* Git integration

* Full stack specification (OS, dependencies, etc.)

* Python/ES6 support (Ruby and PHP coming)

* Client libs so you can call your functions 'natively' in other languages.

It would be awesome to hear what people would like us to build for them. Here
is a blog post on how to build a PDF -> image converter:
[http://blog.stackhut.com/it-was-meant-to-be/](http://blog.stackhut.com/it-was-meant-to-be/)

[1] [https://stackhut.com](https://stackhut.com)

~~~
Jake232
I think your pricing scheme[1] could put a lot of people off. I fall into the
category where I'd be fine on the free tier (< 10 private services), and yet I
don't _want_ a free service.

I know if it's free, then it's going to be under some kind of fair usage
policy, and you're going to rate limit me or have some kind of restrictions
eventually. There's no way it can be sustainably free if I start to push it
really hard. I'd prefer to just know the limits upfront, or have some kind of
usage based pricing.

[1]. [http://stackhut.com/#/pricing](http://stackhut.com/#/pricing)

~~~
pea
Hey Jake -- thanks for your feedback, that is really helpful.

We're going to add some better pricing. How would you like this to work?

- per month, flat rate, ups w/ usage

- per request

- per compute / storage

We really like the idea of only paying for the compute you actually use a la
lambda; one of my gripes with Heroku was having to pay $x when the server was
only in use for short bursts. Why should I pay for downtime?

That said, we've actually had many people say they would prefer per month, as
it is more predictable and they are worried it could spiral out of control.

I would be super interested to hear your thoughts.

~~~
Jake232
I'm not sure that per-request pricing would work, because requests can vary
wildly in resources used / time taken. PiCloud (a somewhat similar idea) used
to charge based on processing time, essentially (down to the millisecond, I
believe).

I personally think that is the correct kind of pricing for something like
this; but monthly plans including X time/requests would likely be a good idea.

------
patsplat
The current problem with this architecture is that the network cannot be used
as a security layer. Databases, search engines, etc. need ports opened to the
public rather than only to selected servers.

~~~
dikaiosune
If you're on a private network (like your own DC), I'd argue that
network-based security is a poor idea, because then an attacker just needs to
plug in to have pretty easy access.

If you're on the public cloud, I'd argue that this is an even bigger problem
as you're then relying on VPC (or the equivalent) to always work correctly.

Why not ignore the networking and just build in robust security? Pubkey
authentication where possible, random long passwords where not? Retry limits
for clients, network intrusion detection, etc. To me, relying on the network
to keep you secure seems a bit like a crutch.

~~~
patsplat
This is an optimistic counterpoint.

However, realistically, nearly all persistence services (MySQL, Postgres,
MongoDB, Memcache, ElasticSearch, etc.) either have been insufficiently
hardened as public services, or are flat-out not intended to be used on a
public port and depend on the network for security.

There is not currently an option to connect an RDS database instance to a
Lambda function without opening said database instance up to the public. It's
a problem.

You are correct that SSH tunneling could be used to provide security but such
usage is not yet a standard approach.

~~~
twagner
Totally agree. It's our most requested feature on the Lambda team and a
priority to enable.

------
tw04
Awesome, right up until you need a feature they don't want to offer, or they
decide to sunset a feature you're the only one using, and you have absolutely
0 control over it.

~~~
_Marak_
If you are interested in a 100% open-source version of Amazon Lambda, you can
check out [http://hook.io/](http://hook.io/)

------
cdnsteve
Lambda does not work inside a VPC, nor can it connect to one. You cannot use
RDS, period. This severely limits the options currently available from a
database and security perspective.

~~~
midnightjasmine
AFAIK the AWS team is working on this. It's one of the most asked-for
features.

------
cptnbob
Too much vendor lock in. Will keep my VMs thanks.

~~~
saintfiends
Exactly. What if Amazon decides to close your account? Because, you know, they
can. Now you're pretty much screwed.

With traditional VPS you just point ansible/salt/puppet to new servers and
you're good to go.

~~~
cptnbob
Ironically this happened to me due to a card expiry fuck up.

~~~
saintfiends
Same thing happened to me. Our card expired, but they wouldn't let us add a
new card (or "payment method", as they call it) because the account was in
some invalid state. When we asked what that was, they couldn't give the
details due to legal reasons.

It took about 2 months with support (Business support) and finally they chose
to close the account.

We created a new account with a new card and migrated our AWS infrastructure.
Unfortunately we still have to use AWS..for now.

------
zkhalique
I came here expecting to read about "distributed computing in the peer to peer
network" and instead found a how-to for "servers-as-a-service" from Amazon.

Check this out instead:

[https://crowdprocess.com/](https://crowdprocess.com/)

------
amirmc
Folks interested in this might like to know that ContainerCon also had a
session on Containers and Unikernels.
[http://sched.co/3YUJ](http://sched.co/3YUJ)

A write up and audio from that session is also available.

[http://thenewstack.io/the-comparison-and-context-of-unikernels-and-containers/](http://thenewstack.io/the-comparison-and-context-of-unikernels-and-containers/)

------
hackaflocka
This is a misleading title. Managed cloud services run on servers. There has
to be a better title. For a moment I thought they were proposing P2P hosting.

~~~
ahallock
The implication is that you don't have to provision any servers to execute
your code. That is the "serverless" part.

~~~
turing_bot_3c
But the name is terrible. It's like saying you went from A to B without a car
by taking a taxi.

------
droithomme
This article only makes sense if you don't know what servers are, and believe
"the cloud" doesn't use them.

------
loafoe
Beware, link-bait! The title should really be "Microservices without
non-Amazon Services", which, if you remove the double negative, really says
"Microservices with Amazon Services", which is, well, not that interesting
IMO. I'd rather write against Cloud Foundry, which abstracts away AWS.

~~~
Animats
Also note the total absence of any reference to cost and billing. This isn't
free.

~~~
scottdw2
Lambda has a fairly generous free tier.

Here are the details on pricing:

[https://aws.amazon.com/lambda/pricing/](https://aws.amazon.com/lambda/pricing/)

You get up to 1M requests / month and 400,000 GB-seconds of compute time per
month.

A default Lambda function uses 128 MB of RAM (0.125 GB), which gives you about
3.2M seconds of compute time (time actually spent executing requests) every
month for free.

Thus if you have functions that take 500 ms on average and use the default
amount of RAM, you can process about 6.4M requests in a month while staying
inside the free compute tier; only the 5.4M requests beyond the free million
are billed, for a total of about $1.08.

Above the free tier limits you pay $0.20 per million requests and $0.00001667
per GB-second of compute.

The pricing is fairly attractive.
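The free-tier arithmetic above is easy to reproduce; here's a quick sketch using the published rates (the request count and duration are just the example from this thread):

```python
# AWS Lambda pricing sketch (rates from aws.amazon.com/lambda/pricing)
FREE_REQUESTS = 1_000_000        # requests/month in the free tier
FREE_GB_SECONDS = 400_000        # GB-seconds/month in the free tier
PRICE_PER_REQUEST = 0.20 / 1e6   # $0.20 per million requests
PRICE_PER_GB_SECOND = 0.00001667

def monthly_cost(requests, avg_seconds, mem_gb=0.125):
    """Estimate a month's Lambda bill after subtracting the free tier."""
    gb_seconds = requests * avg_seconds * mem_gb
    billable_requests = max(0, requests - FREE_REQUESTS)
    billable_gb_seconds = max(0.0, gb_seconds - FREE_GB_SECONDS)
    return (billable_requests * PRICE_PER_REQUEST
            + billable_gb_seconds * PRICE_PER_GB_SECOND)

# 6.4M requests at 500 ms on the default 128 MB use exactly the free
# 400,000 GB-s, so only the 5.4M extra requests are billed: $1.08
print(round(monthly_cost(6_400_000, 0.5), 2))
```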

------
balls187
I built
[http://vat.landedcost.avalara.com/](http://vat.landedcost.avalara.com/) using
this same architecture pattern.

The site is served up via S3, and the back-end logic is a Lambda module that
wraps a SOAP API.

------
jontro
I made a pretty cool Lambda this week: taking inbound email via the Mandrill
inbound email API, processing it through Lambda, then posting it to my Redmine
Docker server. After a lot of fiddling (Lambda doesn't support
x-www-form-urlencoded) it now works great.

~~~
athrun
Have you seen this?
[https://forums.aws.amazon.com/thread.jspa?messageID=673863](https://forums.aws.amazon.com/thread.jspa?messageID=673863)

It's a mapping template for the AWS API Gateway you can use to convert both
HTML form POSTed data and HTTP GET query string data to JSON.
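For anyone else hitting the same form-encoding wall, the conversion that template performs is roughly this (a sketch; the field names are made up):

```python
import json
from urllib.parse import parse_qs

def form_to_json(body):
    """Turn an x-www-form-urlencoded request body into the JSON object a
    Lambda handler expects: roughly what the mapping template does."""
    parsed = parse_qs(body)
    # Collapse single-valued fields so the handler sees plain strings
    return json.dumps({k: v[0] if len(v) == 1 else v
                       for k, v in parsed.items()})

form_to_json("from=a%40example.com&subject=hi")
```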

~~~
jontro
Yeah, I used a mapping template similar to this. Sorry for the late reply

------
sandGorgon
Is there a particular reason why Amazon chose JavaScript? I'm seeing more and
more PaaS services going Node.js-first (or -only), and am wondering if there's
an underlying reason.

~~~
twagner
AWS Lambda supports nodejs and jvm-based languages (Java, Scala, Clojure,
etc.) directly, and lets you run Python, shell scripts, and arbitrary
executables as well. We started with nodejs because it worked nicely for
expressing our initial launch scenario, event handlers.

~~~
sandGorgon
Yes, I am aware that you support Python, etc., but nodejs is your first-class
language. For example, even your docs only mention nodejs (and Java, recently)
[1].

What is _even_ more interesting is that you felt it worked nicely for
expressing event handlers. Can you talk a bit more about that? It's very
interesting to see why not something like Python or Ruby. I know that nodejs
is a callback-oriented _framework_... was it the fact that you can test
locally on nodejs consistently versus what would be the expected output on
Lambda?

[1] [https://aws.amazon.com/lambda/faqs/](https://aws.amazon.com/lambda/faqs/)

------
drinchev
I was thinking... how can you use a serverless webapp with an SEO-friendly
dynamic URL structure, e.g. e-commerce, a social network, etc.? Does anyone
have an idea on that?

~~~
rev_bird
I don't think servers have a lot to do with this -- when you go to a URL, and
a page gets returned, why does it matter where the data came from?

For example: Say you've got an AngularJS app sitting in S3 or something, and
your backend is a Node.js app running in Lambda. Google finds a link to
"random-new-social-network.com/profile/drinchev" somewhere and tries to index
it -- their request is routed to "random-new-social-network.com," where
Angular recognizes "/profile/drinchev" as a route to a profile for some user
named drinchev, pulls in the "profile" template, and spits out your profile,
where Google can read it and call it a day.

If you're talking about search engines getting along with Javascript-reliant
sites, that's a different story, but I don't think I see the problem.

