
Serverless and startups - slobodan_
https://aws.amazon.com/blogs/aws/serverless-and-startups/
======
jasonkester
Which of the following is the more likely failure mode for a new product:

A.) So many customers wanted it that we couldn't scale fast enough.

B.) We ran out of time or money before we shipped.

The author appears to be worried about A, but in my experience it's B that you
need to think about when starting out.

Imagine if, instead of building any of those crazy 30-node architecture
diagrams in the article, he'd had one guy build his entire product in a
day as the equivalent of the "20 Minute Rails Blog Demo". Then shipped it via
any of the thousand-odd boring ways to deploy such a thing.

He'd still have the same number of months to worry about stacking all those
blocks into that unmaintainable tower of pain, but in the meantime his product
would be out in the wild. Possibly even attracting the users that might one
day make such a silly architecture necessary.

As it is, he'll still ship one day. But my money is that he'll never see
traffic that would overload a single server.

Because that's what happens with 99% of the things one ships. The other 1% you
can fix as needed. Possibly using AWS Lambda for the pieces that need it.

~~~
slobodan_
I am not worried about A at all. This was a technical article, but here's the
explanation I posted to twitter:

Although I enjoy talking about technology, and I enjoy working with
serverless, it's important to note that using serverless for @slackvacation is
not a technology decision. It's a business decision. Not because of the
auto-scaling; that's nice to have, but it's not a problem startups face in
their early phase. It's about financial incentives, and also about focus, the
ability to move fast and test ideas, and the ability to grow without shooting
your future self in the foot.

We still often fail to do things fast enough, but we are aware of the problem
and working hard to fix it.

~~~
jasonkester
It sounds like part of your decision was that Lambda comes out cheaper for
hosting. That also doesn't pass the arithmetic I use to evaluate such things.

A dev, fully loaded, will cost you $200/hour. A half cage at a colo with a
really fast machine in it will cost $800/month. So if you choose a stack that
adds 4 hours of work each month (or 80 extra hours upfront), you're behind.
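That break-even arithmetic can be sketched out directly. All figures below are the assumptions stated above, not measured costs:

```javascript
// Break-even: how many extra dev hours per month a stack can cost before
// the colo box comes out ahead. Figures are the parent comment's assumptions.
const devRatePerHour = 200;   // fully loaded dev cost, $/hour
const coloPerMonth = 800;     // half cage + fast machine, $/month
const breakEvenHours = coloPerMonth / devRatePerHour;
console.log(breakEvenHours);  // 4 extra hours/month and you're behind
```

Anything that adds more than those four hours of monthly fiddling costs more than the "expensive" hardware it replaces.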

Given that (as I touched on above), your chances of outgrowing a beefy box in
a colo (or its managed equivalent) can be thought of as zero until you see the
big success event that proves otherwise, my money is still on boring tech and
boring hosting.

Granted, Lambda is cool. I love building stuff on it for other people on their
dime. But as a guy who also builds businesses on _my dime_, it remains a tool
for tiny niche cases that can safely go down on a Saturday morning without
making me cancel my weekend.

~~~
slobodan_
Serverless is so much more than just Lambda functions.

~~~
bradenb
I recently heard the following analogy:

IaaS => Owning a home

PaaS => Bed and breakfast

Serverless => Hotel room

Lambda I guess would be like using the coffee pot in a hotel room? Don't worry
about power, don't worry about beans. Every time you come into your room it's
ready to go, just press "on".

------
thegeomaster
In retrospect, going serverless (using Serverless Framework) has been a
terrible decision for us.

First, exposing an API served by AWS Lambda has the infamous cold start
problem. It's not fun to wait a couple of seconds for a mobile app to respond
just because the request hit at an unfortunate time. One solution we found is
to use a Serverless Framework plugin to periodically ping lambdas to keep them
hot. But each concurrent lambda execution is a separate container, so you have
to anticipate the number of concurrent requests you will be receiving at peak,
or want to handle without ~1s of latency. Ouch - what happened to effortless
scaling? And Amazon API Gateway adds another 100-200ms of latency on top of
Lambda.

If you want to use an SQL database, you have to add your lambdas to a VPC,
unless you want to expose your database to the Internet. But if you need to
access the Internet from your lambdas, then you need a NAT gateway (which you
pay for). And cold starts gain another couple of seconds while AWS attaches
a network interface to your lambda. So, if you don't want any of this, you're
stuck with e.g. DynamoDB. DynamoDB is optimized for scaling, but it's a poor
fit for relational data: very basic support for indexes, no transactions, and
very painful data model migrations. There is also no spatial data story for
DynamoDB, whereas if we had used Postgres, we'd be able to make use of PostGIS,
which is awesome.

The tooling is terrible. Serverless Framework is very rigid and riddled with
bugs - we ended up maintaining our own fork of Serverless alongside forks of 3
plugins and another custom plugin we wrote to support our very simple
workflow. There's no faithful offline reproduction of the API Gateway ->
Lambda environment. We frequently ran into issues after deployment that
wouldn't show up when testing locally with a "simulation" plugin.

There are a lot of other problems we ran into, but these are the biggest ones
I could think of. It didn't help that we didn't have much experience with this
technology before deciding to serve our API using it (this was probably our
biggest mistake). I guess we should have stuck with what we're familiar with
- normal, "serverful" apps. And with the DynamoDB provisioned capacity costs,
I'm not sure we saved that much in the end.

~~~
matwood
Unless you are going to run your db on the same server as your server
application, are you not going to have a NAT regardless?

The Dynamo complaints aren't a Serverless issue, but a NoSQL issue. Hopefully
NoSQL first/everywhere is finally dying as people realize all the benefits of
an RDBMS (and the shortcomings of NoSQL). And if you knew you were doing
something with geo data, picking Dynamo was a tool selection mistake from the
get-go.

~~~
thegeomaster
Our geo data needs still don't require a GIS, but we're slowly getting to the
point where they do. At the time, it was unclear if a GIS would ever be needed
- the company and product have done a 180° turn, as startups usually do. In
hindsight, DynamoDB was a terrible choice, but we didn't have that hindsight
back then. Had we stuck to the tools that we knew how to use, we'd have made
the right choice, which is what I was trying to say in the parent comment.

------
thinkingkong
This is great marketing, but suggesting some symbiosis between serverless and
startups is odd. Startups for the most part don't have scaling issues that are
experienced on the server. They're usually people, process, and financial
issues.

I see posts like this and wonder how many people will use some new paradigm
because people say "it's fast" when they still have no traction (not
suggesting the author doesn't) or they don't even understand the scaling
properties of their software or business.

~~~
tnolet
This completely depends on what the startup is doing. For my use case (company
name in profile) auto-scaling is a godsend, due to individual users being able
to influence how much work our jobs backend needs to do. This is impossible to
calculate and plan upfront, so we totally rely on Lambda for this.

------
arnvald
I think I fail to understand how Lambda works better for startups. I see
primarily 2 arguments.

The first one is simplicity. You pack your Node.js application, upload it to
Lambda, and it works. But then in the article I see a chart with API Gateway,
then Lambda, then SNS, and then Lambda again. How is that simpler than
deploying a single Node.js application on DigitalOcean?

The 2nd argument is scalability. I see how this is relevant, I'm wondering
though how often it becomes really useful. How many products experience
unpredictable spikes in traffic that cannot be handled by a single server
costing $250/month? (that's one c5.2xlarge server on demand).

I believe FaaS and Lambda are very useful, but I think I miss the point why
would people move their whole applications there.

~~~
matwood
Have you done much with a framework like Serverless? The simplicity comes in
because you can start writing business logic functions immediately. Plain Node
typically uses a library like Express or Hapi. While not complicated, it is
_one more thing_ that is boilerplate and doesn't provide any business value.

If the app needs a queue or messaging, sticking to plain Node does not really
change the need.

~~~
simplify
Express and Hapi seem simpler than the Serverless framework you're replacing
them with. You have to specify that _one more thing_ [0] anyway, don't you? I
honestly don't see how this is simpler.

[0] [https://github.com/pmuens/serverless-crud/blob/master/server...](https://github.com/pmuens/serverless-crud/blob/master/serverless.yml)

------
joekrill
I would think the biggest concern with "Serverless" would be vendor lock in.
If a large portion of your SaaS product is "serverless" it's going to be very
difficult to move when company A raises their prices, or company B comes in
with a much more compelling product or price point.

Admittedly I don't know enough about the details to know how big of a deal
this could be, or whether there is work toward a universal "serverless"
standard. But I don't want Amazon/Google/Whoever suddenly raising their rates
2 years from now causing my startup to go under because the margins are so
tight. And for those that say this will never happen, look at what Google just
did with their Maps pricing.

~~~
petra
>> But I don't want Amazon/Google/Whoever suddenly raising their rates 2 years
from now causing my startup to go under because the margins are so tight.

What kind of businesses run on very tight margins, that even some increase in
the IT bill can cause them to fail ?

~~~
joekrill
> What kind of businesses run on very tight margins, that even some increase
> in the IT bill can cause them to fail ?

Well, startups! That's sort of common for startups, isn't it? I realize the
term is basically being used to describe almost all "small businesses" these
days, but I think it's fairly common for startups to have a very short runway
- one that (ideally) gets longer as the startup matures.

------
tnolet
Beyond the nice drawings and good tips on testing, I 100% agree with the
message. Lambda has enabled me to run a fledgling business, and I'm porting my
last Puppeteer EC2 workloads to Lambda as we speak. Node 8 and the new Layers
feature made this possible.

------
rococode
I was recently talking with some folks at an incubator about a website I was
building, and one of the technical guys suggested I consider switching to
serverless before launching (and I thought it was a good suggestion).

Serverless is an easy way for startups to overcome one of the harder technical
challenges: scalability. This is purely anecdotal, but the majority of
startups I've seen - including my own work - do not have the time or expertise
to build out robust auto-scaling systems. They also don't have the money to
dump onto a bunch of servers they can fall back to when needed.

But auto-scaling is arguably more important for startups than for
well-established companies. Thanks to the unpredictability of some random
high-visibility influencer or journalist sharing your product without
notifying you, it's easy for smaller startups to suddenly get hit with traffic
that they
can't handle with whatever infrastructure they have in place. Sometimes you
only get one shot, and if a hundred thousand people hit an empty 503 page on
their first visit, they may not come back for another try. Serverless design
greatly mitigates that problem.

~~~
noink-com
> Serverless is an easy way for startups to overcome one of the harder
> technical challenges: scalability.

But is that kind of scaling really the hard part? If you're already using AWS,
for example, it is trivial to set up an auto-scaling group that will add more
EC2 instances to keep up with the rate of traffic. IME, scaling data stores
is the hard part of scalability.

I'm not saying serverless doesn't make it a little easier, but it is a
different tradeoff. AWS Lambda for example has a lot of limitations too. Maybe
I'm interpreting your comment wrong, but I don't think it's fair to imply that
serverless is the obvious choice for startups just because it helps overcome
scalability.

~~~
vazamb
I think the better scalability comes not from Lambda itself but from the fact
that you have to design for shared-nothing concurrent executions from the
start.

------
pmattos
A recent take (from Tim Bray, a veteran engineer) on why you should go
serverless wherever possible (around 8:30):

[https://youtu.be/IPOvrK3S3gQ](https://youtu.be/IPOvrK3S3gQ)

------
qaq
Honest question: how is it easier for a startup vs. DO with a few VPSes and,
say, manual DB fail-over?

~~~
adzicg
Two things, based on my experience (we migrated from Heroku to Lambda in 2016,
so we've been there for a while):

1. You don't need to worry about reserving capacity, so you don't need to pay
for growth you expect to happen, or worry about not meeting a spike in demand
if it happens.

2. Most of the operations stuff (apart from packaging) is included in the
price: things like monitoring, alerts, dead-letter queues, traffic shifting
between canary versions, failovers, balancing... And it's priced per request,
so this comes out to effectively free if you don't have a lot of traffic.

The big catch is that you don’t control the containers, so there’s no session
stickiness. Getting the benefits from Lambda requires re-thinking how you do
sessions and storage.

~~~
qaq
That's the thing, though: the dev overhead and limitations for a startup with
average compute needs might cost more than the tiny savings (e.g. who cares if
it is $200/month vs $70/month) if it costs $50k extra in dev time to architect
for Lambda.

~~~
scarface74
How is “architecting for lambda” harder than traditional architecture?

~~~
falcolas
Because you're immediately having to go to an MSA (microservice architecture),
which immediately adds a whole new layer of complexity, monitoring, network
latency, API coordination, and configuration to otherwise simple webapps.

~~~
scarface74
In the case of .Net at least, you write your API like you always do and the
SDK provides a wrapper. You can have multiple actions/controllers just like
you would with a traditional Web API project.

[https://aws.amazon.com/blogs/developer/serverless-asp-net-co...](https://aws.amazon.com/blogs/developer/serverless-asp-net-core-2-0-applications/)

You test locally just like you would any other WebAPI project.

~~~
time0ut
Yup. You can do the same in the other languages as well if you want.

------
jlangenauer
I do wonder if, in a few years' time, we're going to be seeing a new genre of
technical blog posts: "How we migrated off serverless to reduce our costs"

~~~
arnvald
It's already started: [https://medium.com/coryodaniel/from-erverless-to-
elixir-4875...](https://medium.com/coryodaniel/from-erverless-to-elixir-48752db4d7bc)

~~~
martimatix
The article talks about moving away from a serverless architecture due to the
cost of API Gateway.

I wonder if the fact that application load balancers can now invoke Lambda
functions could've made serverless viable:
[https://aws.amazon.com/about-aws/whats-new/2018/11/alb-can-n...](https://aws.amazon.com/about-aws/whats-new/2018/11/alb-can-now-invoke-lambda-functions-to-serve-https-requests/)

Mind you, the post was written in August 2018 and the above announcement was
made in December 2018.

