
Pursuit of Serverless Architecture: What if we didn't need app servers anymore? - obiefernandez
https://medium.com/in-pursuit-of-serverless-architecture/what-if-we-didn-t-need-an-app-server-anymore-f5b6586c58f4
======
gue5t
You don't need to run a "backend" if you're trying to empower the people who
use your software rather than just hold them captive.

Who ever thought hacking meant "building centralized services to sell to
plebes"?

~~~
Sanddancer
Does the FSF own their office space and their own generator, or are they using
Electricity as a Service and a Building as a Service? For a lot of people,
maintaining a server and service is much, much more hassle, and much more
distracting, than the payoff would be, so they contract it out. I have quite
a few non-technically-oriented friends who would not be able to figure out the
decentralized solutions that have been bandied about, nor do they care. If you
want something decentralized, make something better, rather than just
bitching.

~~~
tremon
Look into the Freedombox project; they are building something better. It aims
to provide a personal server for everyone.

------
chowes
Those interested in serverless "backends" should check out JAWS
([https://github.com/jaws-framework/JAWS](https://github.com/jaws-framework/JAWS)),
as they've done a great job building out a framework around
this very idea.

Apparently they've got a big update coming Monday, so keep an eye out.

EDIT: Not sure where to put the quotes... "serverless" backends, serverless
"backends", both, or none? :P

------
jchrisa
If you are also a little disappointed that this is not "no servers" but rather
someone else's servers, maybe you'll like this project, which really is
serverless. [http://thaliproject.org/](http://thaliproject.org/)
[http://thaliproject.org/](http://thaliproject.org/)

------
thejerz
AWS Lambda is awesome -- for about a month, I was head over heels in love with
it. However, the language support turns out to be a huge issue. If JavaScript,
Python, or Java 8 isn't the right tool for the job, you're stuck. I hear Go,
and then Ruby, support is slated for 2016. I am optimistic that as time goes
on, language and binary support will make this the future of computing.

~~~
zwily
I wonder what "Go support" means. Wouldn't that just be executing a binary? If
data is passed to it via a file handle, wouldn't that mean any executable
would work? Or will you give it source code?

I know you don't have the answers; just stuff I think about when Go support
for Lambda is discussed.

~~~
azylman
I'm just guessing based on my experience with Amazon's Kinesis Client Library
([http://docs.aws.amazon.com/kinesis/latest/dev/developing-con...](http://docs.aws.amazon.com/kinesis/latest/dev/developing-consumers-with-kcl.html)),
but it probably means that you write something as a package
with certain public methods, and they have a wrapper around it that knows how
to invoke it. That's how KCL integrates with the languages that it supports.
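That pattern -- an agreed-upon interface that a host wrapper drives -- could be sketched like this (all names here are hypothetical, not the actual KCL or Lambda API):

```python
# Sketch of the "wrapper invokes your public methods" pattern.
# The user supplies a processor implementing an agreed interface;
# the host wrapper owns the I/O loop and calls into it.

class RecordProcessor:
    """User code: implements the methods the host expects."""

    def initialize(self, shard_id):
        self.shard_id = shard_id
        self.seen = []

    def process_records(self, records):
        for record in records:
            self.seen.append(record.upper())


def run_wrapper(processor, batches):
    """Host code: drives the processor without knowing its internals."""
    processor.initialize("shard-0")
    for batch in batches:
        processor.process_records(batch)
    return processor.seen
```

The host never needs your source, only an object exposing the expected methods.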

------
cdnsteve
Before you go planning to build everything on Lambda, understand there's still
no VPC support, so no RDS databases. "AWS Lambda functions cannot currently
access resources behind a VPC."

~~~
shawn-butler
As I recall, that was announced as coming before the end of 2015.

I wonder if anyone from AWS can confirm.

------
javajosh
It's not serverless if you use Amazon; it's just that you're using Amazon's
servers. There is hardware, and there is the signal, and then there is a chain
of software between the two. That is all there is in computing, and all there
ever will be.

~~~
oaktowner
I think what he's getting at is that he gets backend functionality without
ever deploying and managing instances himself (similar to what App Engine
offers).

You just deploy your code, and Amazon automagically handles scaling, etc.

~~~
matthewrudy
I think it's the pay-on-demand bit that is crucial.

Heroku is fine for that, but they spin your instances down, so the first
request may well time out.

------
x5n1
What is serverless? Serverless is simply on-demand, pay-per-operation billing
for servers.

------
powera
Google App Engine. This is what App Engine does.

~~~
spotman
This is newer and shinier, and not all use cases overlap, but at the end of
the day, yes, it's someone else managing the execution of your processes.

Having worked on GAE at very large scale, my gut says that while this has
many different use cases, it will make sense for small to medium size
projects, and there will be a tipping point around cost; this will appear
less like a silver bullet after some horror stories.

At small scale this may even save money, if you truly don't need your code
running all the time and can deal with the latency of starting your code. For
projects that think in terms of N thousands of transactions per second, the
math breaks down badly. It would cost approximately $2,500 to run a 100ms job
a billion times in a month with only 1.5GB of memory allocated to the process.
You could pay $4,350 per year with reserved pricing (is there even reserved
pricing with Lambda? Couldn't find it.) for a c3.4xlarge, which has 30GB of
memory and 16 cores, and which could easily process many times more work than
a billion 100ms jobs per month. So Lambda works out to well over 6x more than
rolling it yourself on EC2, by my overly conservative napkin math.
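For what it's worth, the napkin math above checks out against Lambda's published duration price of $0.00001667 per GB-second (the per-request charge of $0.20 per million invocations adds a bit more on top):

```python
# Reproducing the napkin math: a billion 100ms jobs per month at
# 1.5GB, priced at Lambda's $0.00001667 per GB-second, versus a
# reserved c3.4xlarge at roughly $4,350/year (figure from the comment).

GB_SECOND_PRICE = 0.00001667          # Lambda duration price, USD
REQUEST_PRICE_PER_MILLION = 0.20      # Lambda request price, USD

invocations = 10 ** 9                 # one billion jobs per month
duration_s = 0.1                      # 100ms each
memory_gb = 1.5

compute_cost = invocations * duration_s * memory_gb * GB_SECOND_PRICE
request_cost = invocations / 1e6 * REQUEST_PRICE_PER_MILLION
ec2_monthly = 4350.0 / 12

print("Lambda compute:  $%.0f/month" % compute_cost)   # about $2,500
print("Lambda requests: $%.0f/month" % request_cost)   # about $200
print("EC2 reserved:    $%.0f/month" % ec2_monthly)    # about $360
print("Ratio: %.1fx" % ((compute_cost + request_cost) / ec2_monthly))
```

Compute alone comes to roughly $2,500/month against roughly $360/month for the reserved instance, which is where the "over 6x" figure comes from.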

The other pain point these types of systems produce is that instead of scaling
a server here or a few servers there, you're thinking in terms of processes,
and time and time again this becomes a very rapid way not only to provision
more horsepower instantly, but also to run up the bill instantly. Code that
might have run on 1,000 processes on traditional infrastructure, because you
had to tune it to fit in that infrastructure (and I don't mean finely tuned;
things like setting your database pool correctly and having basic
concurrency), can accidentally be scaled up to 10,000 processes and appear to
be fine. Unfortunately, I have seen this quite a bit on GAE, and it has led to
some very costly mistakes.

Processes are harder (in my humble opinion) to get a good feel for in terms of
what costs what, especially if you're scaling up rapidly. Instead of adding a
few servers and understanding how that affects the bill, going from 1,000 to
1,500 processes happens much more easily, and you pay for that convenience.

My .02 is that this would be ideal if you're either in a hurry and money is no
object, or you don't plan on growing past a certain size, or possibly for
workloads that truly don't run that often. I think this whole craze of
let's-replace-everything-with-Lambda is going to lead to rapid development
(awesome) at high prices if your project takes off, which will leave
developers rushing to get off of it if/when their project needs to scale under
financial constraints (not awesome).

~~~
HillaryBriss
Agree.

With AWS, once you start using multiple services (e.g. lambda, S3, dynamo db,
etc), it's difficult to estimate/model the costs in advance.

First you have to estimate how many times your app is pulling down a file or
hitting a given db table. But this is dependent on unpredictable user
behavior. You don't really know what users will do with your app until they
actually do it.

Then, with e.g. DynamoDB, you have to think about how many indexes you have
on a table, and how often your app will query those versus how many times it
might do a full table scan. You have to allocate these specially defined AWS
resource units. I don't find it intuitive.

I wonder how many developers really know in advance what their app is going to
cost per user.
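One way to at least bound the uncertainty is to write the per-user model down explicitly and multiply it out. A sketch along those lines, where all the counts and most of the prices are illustrative placeholders rather than quotes from the AWS price sheet:

```python
# Back-of-envelope per-user monthly cost model. The counts and the
# S3/DynamoDB prices below are placeholders -- substitute measured
# usage and current AWS pricing before trusting the output.

PRICES = {
    "lambda_invocation": 0.0000002,    # $0.20 per million requests
    "s3_get": 0.0000004,               # placeholder per-GET price
    "dynamo_read_unit_hour": 0.000065, # placeholder per read-unit-hour
}

def monthly_cost_per_user(usage):
    """usage maps an operation name to its estimated monthly count."""
    return sum(PRICES[op] * count for op, count in usage.items())

estimate = monthly_cost_per_user({
    "lambda_invocation": 30000,    # ~1k requests/day per user
    "s3_get": 5000,
    "dynamo_read_unit_hour": 720,  # one read unit, always on
})
print("Estimated cost per user: $%.4f/month" % estimate)
```

The model won't be right, but it makes the assumptions visible, and the unpredictable-user-behavior problem then reduces to refining a few input counts as real data comes in.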

------
cdcarter
The title may be a little funky, but this is a great post for leading someone
through the nightmare that is the Lambda/API Gateway docs and setup process. I
tried doing something similar about a week ago while not operating at full
brain capacity, and could not figure it out. This is a great narrative of how
someone figured it out, and it was great to follow along and actually learn.

------
Sarki
Big misunderstanding here: it's not "server-less" (as in without-a-server),
it's Serverless (as in "these products we're trying to sell through HN"...)

You need your users to be able to share information between them? You will
need a server at some point. Period.

Call it miraculous AWS or whatever, it's still a server.

------
blazespin
Someone was recently credited for an idea that has just been around forever?
"Fred George, a former colleague & mentor at Thoughtworks is widely credited
as one of the originators of the concept of microservices."

~~~
cdcarter
I think Fred George was working on this kind of stuff back in the 70s.

------
sanatgersappa
Firebase does a good job with this model. As does Parse.

~~~
kennywinker
They do a pretty good job, yes. I've been building something on Parse, and the
biggest issue I've run into is that they don't allow for npm modules. Smaller
modules can be ported to parse cloud code, but if you have an interesting
module with native code or just many dependencies, this becomes unworkable.

Firebase doesn't have a Cloud Code like offering at the moment, which can be
very limiting depending on what you're building.

~~~
gfosco
Now we actually recommend you run a full node environment on Heroku or another
provider, and use webhooks. See
[https://github.com/parseplatform/parse-cloud-express](https://github.com/parseplatform/parse-cloud-express)
and
[https://github.com/parseplatform/cloudcode-express](https://github.com/parseplatform/cloudcode-express)

------
callesgg
I was going to make a joke about there no longer being a need for servers,
since we have the "cloud".

Then I read the article...

------
nathancahill
"Adding a backend to solve problems with a static site. Now you have two
problems."

------
omphalos
Reminds me of nobackend.org

