Serverless Architectures (martinfowler.com)
477 points by runesoerensen on July 18, 2016 | 202 comments



One of the bigger problems with serverless architecture (beyond the catastrophic lack of good debugging and development tools) is managing multiple users, working on multiple code branches, all needing environments that closely mirror production. This leaves serverless as a decent way to hook an event callback to some AWS event (a new file uploaded to S3, etc.) but IMHO not anything that approximates any kind of business logic, or anything that will need to be iterated on by more than one developer. I feel it is a massive reach to market this to anyone at this point, and if this catches on, it would really make me hate programming compared to what local development in VMs feels like, where I have much better tools. It's inefficient both for the developer and at runtime.


I completely agree with you regarding debugging, especially with Lambda. But App Engine allows you to connect a Python REPL directly to your app in the cloud. It doesn't get any better than that. And I've noticed that because mobile developers don't get much choice in what they develop against (the device maker dictates it), they seem to move much faster than your typical server-side developer, who bit-twiddles VMs, Dockerfiles and distributed services much more than they admit, leaving much less time to write actual functionality. Too much choice is also a prison.


First we need a common standard for the serverless pieces of code to talk to the app server and gateway... maybe we could call it a common gateway interface. Then maybe someone will write an apache module for it.


I laughed at the CGI reference, but TBH pub/sub + microservices feels really awesome and clean so far. Combined with DB as a service it's even better IMO.


I was thinking DBaaS as well, which is the part that's usually under-emphasized. NoSQL for specific tasks and (sharded if needed) relational with stored procedures for anything complex. When you think of Postgres as its own (heavily optimized) server, it changes the game. For most apps, at that point you just need a passthrough layer exposing endpoints, which is usually a dead simple nodejs app behind a balancer. The rest of the heavy logic is then offloaded to the client. This is why full-stack makes sense now more than ever. It's getting so easy there's almost no excuse anymore.


>This is why full-stack makes sense now more than ever. It's getting so easy there's almost no excuse anymore.

There are plenty of good reasons to avoid SPAs.


I'd like to hear one. Server-side rendering if/where needed, don't overload on npm modules, drop jQuery entirely, React lib served from a CDN. SPAs fail in the hands of amateurs. They excel as a way of instituting rigor in dynamic site development when wielded by people who know what they're doing.


I created Lite Engine[1] specifically as an alternative for this. Being able to POST and execute queries saved on the backend enables A LOT of functionality. SQLite, because it's simple to manage and allows massive multi-tenancy at very low cost. And it performs acceptably well for small/medium-size workloads.

[1] https://www.lite-engine.com
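For a flavor of the saved-query idea, here's a toy sketch (stdlib only; the query names and schema are made up for illustration, not the actual Lite Engine API):

```python
import sqlite3

# Clients POST a query name plus parameters; the backend only ever runs
# pre-approved SQL statements, never arbitrary query text.
SAVED_QUERIES = {
    "users_by_plan": "SELECT name FROM users WHERE plan = ? ORDER BY name",
    "count_users": "SELECT COUNT(*) FROM users",
}

def execute_saved(conn, name, params=()):
    """Run a saved query by name; unknown names are rejected outright."""
    if name not in SAVED_QUERIES:
        raise KeyError("no such saved query: %s" % name)
    return conn.execute(SAVED_QUERIES[name], params).fetchall()

# One SQLite database per tenant keeps multi-tenancy cheap and isolated.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, plan TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("ada", "pro"), ("bob", "free"), ("cyd", "pro")])

print(execute_saved(conn, "users_by_plan", ("pro",)))
```

Because only named, parameterized statements run, the endpoint stays a dumb passthrough while the query logic lives server-side.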


Could you expand on this a bit? Are you using OAuth for microservice authentication, or some such? How, specifically, does this address the parent comment? Honestly curious.


I think you're right that one of the big downsides currently for AWS lambda would be dev tooling, however one of the big upsides is that the code you throw up there should run "forever". This is potentially really useful if you are simply working on frontend code, and hitting these Lambda endpoints as a client.

I've set them up a few times for contact forms and other light pieces of functionality for static websites, and it fits that niche really well. Obviously, I think the serverless movement has some grander notions of the size and scale of applications that could be constructed, so we'll see.


With that in mind, serverless architecture seems to be a micro-optimization that is of the greatest benefit to enterprise customers whose scale is large enough to benefit from reducing their server load to the barest minimum, at the cost of greatly increasing development costs.

I don't see how this would be much fun to work on in a team. We have enough trouble here squabbling over who gets to use the staging server next.


I'm pretty sure that if you're considering using Lambda, you'd want to automate creating and deploying the Lambda functions in such a way that you could create any number of them for each developer to play with.

That said, the problem with debugging stands (for AWS lambda, at least).


There's certainly a need for offline prod-like environments/harnesses for serverless development. However I don't think that's a criticism of the architecture, simply a reflection of the immaturity of the style.


Why is serverless particularly bad for this?

If anything, it is easier with serverless, because you can branch pretty cleanly down to the single function level.

All you need is a level of indirection, usually at the domain and pathing level.
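For example, a hypothetical naming scheme (all names invented) that gives every branch its own copy of each function and its own gateway path:

```python
# Per-branch indirection: each developer's branch gets its own deployment
# of every function, namespaced at deploy time, so environments don't
# collide. The naming scheme below is invented for illustration.
def function_name(service, function, branch):
    # e.g. "checkout-charge-card--feature-retries"
    return "%s-%s--%s" % (service, function, branch.replace("/", "-"))

def api_path(function, branch):
    # The gateway routes /<branch>/<function> to the matching deployment.
    return "/%s/%s" % (branch.replace("/", "-"), function)

print(function_name("checkout", "charge-card", "feature/retries"))
print(api_path("charge-card", "feature/retries"))
```

Each developer deploys under their own branch name and gets an isolated stack of functions, torn down when the branch is merged.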


Second that. There's ELB + EC2 which can give you auto-scale but data persistence needs to be handled separately. There's AWS ECS which is/was quite unwieldy but I am hopeful for GAE's Flexible Environments (beta) built on Docker containers, putting it midway between Docker and Lambda with some interesting features https://cloud.google.com/appengine/docs/flexible/


Check out https://hyper.sh, a container-based cloud.


The vendor lock-in alone is enough to make BaaS dead on arrival. Some seem to lock you not only into specific platform APIs but also into a single programming language (i.e. JavaScript), as if it weren't bad enough to have a single language on the client.


I like to think that vendor lock-in is something I care greatly about, for the correct reasons. In the case of AWS Lambda, however, it doesn't apply.

Serverless architecture isn't a "lockin" problem. You don't develop your app with serverless in mind, thinking that you might some day go back to serverful. Serverless is a core design of (some parts of) your app.

When I hear people say they're trying to "be generic", "abstract away serverless architecture" or some such, I'm replacing "serverless" with "asynchronous", or "functional", and it sounds bloody ridiculous, because it is.

If competitors to AWS lambda arise, I do hope they'll have a similar architecture available because that's where the power of Lambda resides. The "lock in" is only whatever those fake WSGI objects lambda is passing to the function are, as well as internal triggers for the servers (SNS, S3, ...). Those things can be abstracted away if competitors offer similar mechanism.

Trying to abstract serverless+serverful together is ridiculous. If you're successfully doing this, it sounds like your app doesn't belong in a serverless environment in the first place.

PS: There's plenty of drawbacks which are vendor-specific, not tied to the serverless design itself. For example, the Amazon API Gateway is atrociously bad. AWS Lambda doesn't even support Python 3, only Python 2.7 (get your shit together Amazon). Those are things I'd switch to a different vendor for in a heartbeat... if there was actual competition.


Parent post didn't say anything about abstracting away "serverless". Lock-in refers to (as the OP article discusses) the fact that if you use AWS Lambda, you are tied to their APIs and deploy mechanisms. If you want to move to Google's FaaS offering, you have a lot of work to do, and may need to restructure your app logic as well.


And that's wrong; a complete misunderstanding of how both serverless in general and Lambda specifically works.

You provide an entry point to your application. AWS provides you with an `event` and `context` object, both of which are nonstandard. You extract, out of those objects, the data you care about... and you pass it down to whatever it is you want to run.

The extraction is very simple. Here, take a look at a SNS-triggered lambda:

https://github.com/HearthSim/HSReplay.net/blob/e12a6b8217b3b...

One line of code is vendor-specific. I could move to a different vendor in a heartbeat. Yes, bits and pieces are using SNS/S3; some of which are already abstracted away, the rest which can be easily replaced.
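The pattern looks roughly like this (a simplified, hypothetical handler in the spirit of the linked file, not the actual code):

```python
import json

def process_replay(payload):
    # Vendor-neutral business logic: plain Python, knows nothing about AWS.
    # (Hypothetical stand-in for the real processing code.)
    return {"processed": payload["id"]}

def handler(event, context):
    # The one vendor-specific line: dig the SNS message out of the
    # nonstandard `event` object, then hand off to plain Python.
    payload = json.loads(event["Records"][0]["Sns"]["Message"])
    return process_replay(payload)

# Local invocation with a fake SNS event; no AWS involved.
fake_event = {"Records": [{"Sns": {"Message": json.dumps({"id": 42})}}]}
print(handler(fake_event, None))
```

Moving vendors means rewriting that one extraction line for the new trigger format; the business logic underneath is untouched.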

And frankly, when dealing with platforms as large as AWS, the vendor you choose is a big deal. You're not supposed to change every week... so it doesn't matter if it takes a little cleanup to move to another vendor. If it's taking you more than a few hours, your code is making assumptions it shouldn't about the underlying architecture, and that's your fault, not Amazon's.


I don't think your code example supports the point you're trying to make. Of 3 lines of non-logging code, one of them is AWS-specific, and so would need to be refactored to use another FaaS platform. That's what lock-in refers to. It's not saying you cannot migrate at all, it's just saying that there's a refactoring cost to swap out the platform APIs.

If you can get the same message-delivery semantics from Google's message bus, then the refactor would end there, but if you happen to be depending on a feature that's not implemented on your new platform, then you might have to refactor your application logic further.

Furthermore, you still have to get your code into production, upgrade it, monitor it, and ensure it plays well with the other functions it depends on, and all of that currently tends to depend on platform-specific tooling.

Much of this is also the case in the VM-based PaaS world. But, there you have projects like libcloud that can abstract some of the differences away. That feature gap is pointed out in the OP article, which makes it clear that the lock-in issue isn't a fundamental defect with the functional paradigm, just a side-effect of the FaaS offerings being young.


I agree with this

If you've designed your client-side javascript (or swift or whatever it is) properly, you've abstracted out the API calls to say, AWS Lambda, and wrapped them such that the implementation is not tightly coupled to the calling code. Eg you write a generic Storage class and call Storage.Save, which wraps whatever API you use to save stuff, be it Lambda or otherwise. Change vendor and you just change your Storage class's implementation without needing to alter client functions..
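A minimal sketch of that idea (class and method names are illustrative, not a real SDK):

```python
# Call sites only ever see Storage.save/load; the vendor-specific
# transport lives in exactly one subclass.
class Storage:
    def save(self, key, value):
        raise NotImplementedError
    def load(self, key):
        raise NotImplementedError

class LambdaStorage(Storage):
    """Would POST to an API Gateway/Lambda endpoint in production."""
    def __init__(self, invoke):
        self._invoke = invoke  # injected HTTP/SDK call
    def save(self, key, value):
        self._invoke({"op": "save", "key": key, "value": value})
    def load(self, key):
        return self._invoke({"op": "load", "key": key})

class InMemoryStorage(Storage):
    """Drop-in replacement for tests, or after a vendor switch."""
    def __init__(self):
        self._data = {}
    def save(self, key, value):
        self._data[key] = value
    def load(self, key):
        return self._data[key]

store = InMemoryStorage()  # swap in LambdaStorage(...) later; call sites unchanged
store.save("greeting", "hi")
print(store.load("greeting"))
```

Changing vendors then means writing one new subclass, not touching every caller.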


You can run all the Python 3 code you want. Just execute it from the Python 2.7 environment. The AWS Lambda environment does support it as the AWS environment is just the basic Linux AMI, and then you can include anything you want into your Lambda code zip file. You can exactly replicate the Linux image they use for Lambda and create your own custom environment that will run your code. What more could you ask for?
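Roughly, the workaround looks like this (a sketch; `sys.executable` stands in for the Python 3 binary you'd bundle in the zip):

```python
import json
import subprocess
import sys

# The 2.7 handler serializes the event to JSON and pipes it through a
# bundled Python 3 interpreter, reading the result back from stdout.
PY3_WORKER = (
    "import json, sys\n"
    "event = json.load(sys.stdin)\n"
    "print(json.dumps({'doubled': event['n'] * 2}))\n"
)

def call_py3(event):
    proc = subprocess.Popen(
        [sys.executable, "-c", PY3_WORKER],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    out, _ = proc.communicate(json.dumps(event).encode())
    return json.loads(out)

print(call_py3({"n": 21}))
```

Every call pays for a subprocess spawn plus two serialization passes, so it's strictly a workaround, not a substitute for native support.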


I know, and this is far slower as it involves a serialization/deserialization pass on the data as well as a subprocess.

> What more could you ask for?

Native support. Was that not obvious?


You can't evaluate a technology by looking at drawbacks alone, you have to consider the benefits as well, and whether they outweigh the drawbacks.


Well, the point is that the benefits don't outweigh the drawbacks because of the vendor lock-in. Whenever you hit a limitation there is NOTHING you can do. No duct tape! Also, in the long term the vendor's roadmap may not align with your business (i.e. pricing, features etc). Do you have a migration roadmap? Of course you don't... you are already too busy dealing with the BaaS restrictions/APIs. Definitely BaaS has its place alongside PaaS, DBaaS and IaaS, but if you are locked in to a specific vendor your options become quite limited very quickly. Maybe when something like Docker for BaaS appears I would reconsider this. BaaS is not really more than a framework with various constraints, so there is nothing stopping you from implementing your own BaaS on top of an existing PaaS (to avoid managing servers). I'm actually doing just this and it works well, but I can see how Lambda or any other vendor offering would not work due to the amount of customisation required.


For you. At the moment. "Dead on arrival" has a meaning, and this isn't it.


Well, if I can't use it now due to the constraints mentioned above, it is dead to me. Would you bet your project on Google functions[0]?

[0] https://cloud.google.com/functions/docs/


That isn't what DoA means. "DoA" doesn't care whether an individual can use it; it cares about the collective.

I'm sure Stallman couldn't use the iPhone when it came out because of constraints; that doesn't mean the iPhone was ever DoA.


I can't speak for the collective. As far as I'm concerned the current offers of BaaS are dead on arrival. Time will tell. I could see some value in the iPhone but I don't see much value in the current BaaS offers. Not enough to switch from the current alternatives. They are not transformative like the iPhone was. To complete your analogy we had smart phones before just like we have various BaaS services now.


You should understand a platform's general limitations before committing to it for any given piece of functionality. If you can't use it for something, don't. No one claims you can't mix and match. Use something like AWS Lambda only where it makes sense. Deploy your own hosts for more complex use cases.


You may be misunderstanding the concept of lock-in. It means you are stuck with whichever vendor created the lock-in. The benefits of services always seem to increase, so whatever benefits exist for a current service aren't worth forgoing all potential benefits of all future competitors. That's what lock-in creates. That's why it's always bad for anything meant to be long-term.

Now, short-term projects that can disappear in a few years or less carry less risk from lock-in. We still see those disrupted on occasion when the vendor abruptly disappears. Users had to redo their work, which wouldn't have been necessary with a lock-in-free approach.


I still see this as flawed. Avoiding all vendor lock-in means making every possible piece of software you write completely generic - no GIS from Postgres, no third-party queue/messaging servers/services - none of it.

We're "locked in" to AWS - in that it would take a massive effort to move to Google/bare metal, despite not using any AWS-written servers (that is, we're FreeBSD, MySQL, Redis, etc.). But the benefits of AWS have been huge for us, and anyone saying they're outweighed by the potential downside has no idea what they're on about.

The right tool for the job is sometimes a vendor with specific services that, yes, sometimes have a lock-in component, even for long-term use.

In this specific case this particular BaaS implementation may not be full-cooked enough to entice you onto it, but saying vendor lock-in is bad as a blanket statement or incontrovertible truth is just silly.


Heck, I've done mainframe-to-unix porting. The concerns about "lock-in" raised here are silly and trivial. If you're writing serverless code, it's already stateless and inside a well-defined and well-documented box. The hard stuff, always, is the business logic.

If you feel locked in, you lift-and-shift your serverless logic to a new platform. Write some shims and wrappers. It ain't that goddamn hard.


Getting locked into free software like PostgreSQL is a lot less risky than getting locked into a proprietary system whose owner could gouge you or evict you at your most vulnerable moment.


Exactly. There's almost no comparison, given that a proprietary company can straight-up tell you "No, we will no longer sell you the product even if you will pay big $$$ for it."

Not theoretical: Microsoft, Borland, IBM, HP, Oracle/Sun... all did it at least once with Microsoft doing it many times. They're not open, which drives up the cost of exiting dramatically.


Google should be on that list too.


Well, that's my point! In this particular case (BaaS) the current implementation is not worth it due to the lock-in. The issue with lock-in is that you can't fix it yourself if it's broken. Maybe if it were free and my project were a toy I would consider it, but as it is now it's not worth it. I also doubt you jumped on the AWS wagon on day one. AWS has a wide range of services. With the right abstractions I think most of them can be easily replaced if you were to move to a different vendor or bare-metal servers. BaaS is a totally different kind of animal. Did Beanstalk work well for you?


We use DynamoDB heavily, among other services, and the lock-in question always comes up. And it's like: would you rather get to market a month earlier, or spend a month building something just in case one day it doesn't work? It seems like a no-brainer that lock-in is often a cost to push off to later, especially considering you may never have to pay it.


The question is how much you are willing to trade off for the managed infrastructure benefits. The database is one level, but BaaS is a totally different level. Next would be the programming language. Would you feel comfortable basing your project on a proprietary Amazon programming language? (e.g. see Salesforce's Apex.) Maybe yes, but I can't see this becoming mainstream anytime soon. It would be a backward step. I use some managed/proprietary services too, but I always consider them a liability/short-term workaround. Most of the AWS services were successful because they are based on open source/3rd party software (i.e. the OS, DB engine etc). BaaS totally changes that. It's all proprietary except the programming language, which may not be one of your preferred ones either. Google lost a lot of ground in cloud services because it promoted a proprietary platform (GAE). The concept was great but we can see how it played out. These are the reasons why I think the current BaaS offers are dead on arrival.


But it's not on a proprietary Amazon language! The three most common/dominant languages in the back end app development world are already covered. Why are you inventing a problem that doesn't even exist to complain about?

For that matter, any number of proprietary languages have been successful over the years. If it solves a problem, and solves it better and faster than reinventing the wheel, most businesses will use it. Time-to-market is a much bigger deal than abstract concerns about future platform change. I've been in this industry for 20-odd years, and the only things I've seen last are Unix and SQL. Everything else is new since I started. I sometimes work with bleeding edge, and I'm not afraid.


I didn't say there is a proprietary language. I just said that the current trend leads to total lock-in. After BaaS, the programming language is the only open piece of your stack. I hope you can see the difference between a per-core license and cloud services. The cloud vendor can render your code/business useless at any time. You control nothing in your stack. Some are comfortable with that, but I don't think it's a good thing for the industry or society. Proprietary software is sometimes a necessary evil, but that's not really the case with BaaS. At least not in its current form.


Since the language isn't proprietary, there's nothing keeping me from moving the functionality to a new environment but laziness. Take the business logic encapsulated in the serverless function and wrap it into a microservice, and run it anywhere. Not that hard.


That sounds easy, but in reality you end up rewriting 90% of the code. BaaS/serverless by its very nature is tightly coupled with the environment (in this case a proprietary ecosystem). I think you're also underestimating the effort required to migrate a "system". Migrating one service is not the same as migrating 90 of them. You need to make them cooperate in the new environment (authentication, communication APIs etc), not only perform their tasks (i.e. crunch some data or update an entity). The only significant thing you keep from the previous system is perhaps the specification (hopefully you have such a thing). It has nothing to do with laziness. It's just a waste of time and energy. You could use that time to improve the current system, not migrate it to a different platform. You end up with a rewrite. You will see it when it hits you. I've been all for managed services, but lately I've started to change my opinion.


90% of the code is locked to the vendor?

I've been using lambda lately and perhaps 5% of the code in my Lambdas is AWS-specific; the vast majority was rapidly extracted from existing projects, wrapped in a simple Lambda interface, and done.

While I don't advocate Lambda for everything, there's almost nothing there to lock you in; it's a simple function that accepts two parameters. Even coupled with API Gateway, it's just moving my routes definition to a slightly different layer. I could easily port any of this work back to any common web framework with almost zero effort. I won't, because Lambda solves actual problems for me now.


That's a great assessment. Lock-in risk would be negligible in that case.


I've done far more exotic ports than that - wholesale platform rearchitectures (mainframe batch jobs to J2EE SOA, anyone?). I've dealt with obsolescence in some seriously painful ways. Nah, moving a bunch of serverless business logic in Python or JavaScript to a Flask or Node.js host? That's trivial. Authentication? APIs? Write a shim!

If infrastructure is really 90% of your code work and you can't salvage business logic in a platform change that doesn't require a new language, the problem isn't the serverless architecture.


It depends on how tightly coupled your code is to the platform. You can't get more coupled than BaaS, which is why I think proprietary BaaS is such a bad thing.


GAE is a great example of the lockin problem- they changed the pricing model, bills skyrocketed, and it was super hard to get out.


You can do AWS without lockin. You can do messaging without lockin. You can do Postgres without lockin. You just have a generic way of integrating it with your apps plus a backup option. Then, you can leave at any time with lower costs.

Any kind of practice without such a setup means you've decided to let the vendor control your situation from that point onward. Historically, from mainframes to servers to desktops to mobile, this always led to fewer benefits over time, where new, better stuff emerged that locked-in customers couldn't use at all or benefit from as much as they liked. Those that avoided lock-in could. Worse, some locked-in customers got hit with price hikes or even EOL'd products.

So, I think the default should be justifying a lock-in product rather than the other way around. I've seen situations where it made sense. My comment mentioned one. It just rarely does, especially today, with so many cheap, flexible alternatives to commercial lock-ins.


Lock-in is fine if it has value.


People keep saying that without a context. Non-lock-in usually has value too. Many times for similar or lower cost. And with way less risk.

So, the question instead is "With these requirements, which products and services meet them with good, cost-benefit analysis? And which is best when benefits and liabilities are considered?"


Everything is situational. Nobody is going out and buying an IBM Mainframe because of my comment.

I have seen people waste precious time and energy on "open" solutions, and lose focus on other, important things. I'd rather be stuck extricating myself from some lousy vendor relationship than not selling my product.


> You can't evaluate a technology by looking at drawbacks alone, you have to consider the benefits as well, and whether they outweigh the drawbacks.

For a particular use you certainly can. Some problems have hard constraints. Freedom from lockin is a common one.


I got burned by Parse on this already. One of the nice things about Zappa (https://github.com/Miserlou/Zappa) is that you can return to a non-serverless architecture if you want to.


Once you've written a bunch of code in a language, you're pretty much locked in - without a massive effort - in any event.


In almost all cases, there's a significant difference in the sort of risk that comes from B/FaaS dependencies and the sort of risk that comes from PL lock-in. The former leads to technology lock-in and vendor lock-in, while the latter almost never[1] leads to vendor lock-in.

Technology lock-in without vendor lock-in isn't that huge of a liability. A popular PL under an appropriate license isn't going to disappear tomorrow, and its creator doesn't have the power to hold your business at gunpoint. Conversely, B/FaaS providers upon which you truly depend definitely can extract (or by disappearing simply destroy) much more value than you originally anticipate.

[1] If you use Wolfram Language or Matlab then your choice of PL causes vendor lock-in, but most PLs aren't like this.


I agree - "vendor lock-in" seems like a valid point, although I don't know the details of many of these services. "Locked into a programming language" is something that will probably happen kind of naturally anyway.


> Once you've written a bunch of code in a language, you're pretty much locked in - without a massive effort - in any event.

The difference is that one doesn't have access to the Lambda infrastructure itself. If I write an entire app in XYZ lang, I can still run it on my own server, or make it an executable and call the code through RPC. If I base all my work on third-party vendor infrastructure I can't replicate, it requires an even more massive effort to migrate to something else, because now I basically have to rewrite my app from scratch. FaaS may have killed BaaS but isn't going to kill PaaS.


Why would you have to re-write your app from scratch? I'm assuming most of your actual business logic is in the Lambda code and the data model. Lambda wraps things pretty tight anyway, so just write a shim layer, and lift-shift your code to a new platform.


Completely agree with this. There is one project[1] we're working on which aims to provide said shim, but it's still early days. Developer tools will emerge to solve the major issues around serverless.

[1] - https://github.com/iopipe/turtle


I don't follow. You can write other modules in different languages. The data sources are language-agnostic. Nobody forces you to use a specific programming language in the first place unless the platform is closed source. The point is that with a closed-source BaaS you have no control over the tools, costs or availability (i.e. deprecation policy) of the service.


Locked in maybe, but not vendor locked in if the language is open source.


Yeah, I guess less so now, but before C# was open sourced the only viable option was Windows servers and people usually used SQL Server with it. So in that case many people chose to lock themselves into that vendor and stack. So in my mind Lambda as a back end is no worse than that in terms of vendor lock-in. I suppose that Amazon will be able to provide support for many years given how much of the internet currently runs off of them.


(ahem) Serverless v1.0 is multi-cloud and multi-language...

https://github.com/serverless/serverless/tree/v1.0


I completely disagree that it's DOA. In fact, I think it's clear that is the future of development.

Like any piece of the software stack, there are providers focusing on services that are proprietary, open source, and a mixture of the two. I am personally most excited about the hosted open source providers, because it gives dev teams immediate access to the technology, but the option to bring it in house when and how it makes sense.


I didn't say the technology is DOA. Just the current offers, which seem locked in to specific vendor implementations. The current BaaS offers are too intrusive and just not good enough to be worth the switch from a PaaS to a BaaS.


> good enough

To whom? I see many people who need a trivial amount of server logic. Right now, they can do one of 3 things:

- Use a random site that may or may not radically change its business model in the next few years
- Pay $5/month for a VPS, then get forcibly dragged up the learning curve of becoming a full-time sysadmin
- Pay pennies for Lambda

Is there a reason to re-write your backend? No. Is there a reason to consider it on your next project? Possibly.


> To whom? I see many people who need a trivial amount of server logic. Right now, they can do one of 3 things:

Since any site "may or may not radically change their business model" at any time, you really only list two substantive options:

(1) Use a VPS, or (2) Use Lambda

But, really, there are more choices:

(1) Use a VPS or IaaS (with full sysadmin overhead),
(2) Use a PaaS (like Google AppEngine) and avoid sysadmin overhead (but probably have more code than with a "serverless" function host like Lambda), or
(3) Use a "serverless" function host like Lambda

You probably don't want to do #1, since for low-volume, trivial logic it is (1) comparatively expensive and (2) heavy on sysadmin and programming overhead compared to the other options.

#2 is lower overhead (particularly on the sysadmin side), but still more than #3. But possibly cheaper (e.g., if the requirements fit within the free quotas for AppEngine).

#3 may be the best solution for some things, but it's not as stark a choice as you present it to be.


Well, for AWS Lambda, they offer three languages: Java, JavaScript, and Python. Even then, it's really just running in a container, and you have access to anything in their standard Linux AMI plus whatever else you include with your code, so you could really use almost any language you want. I'm using wkhtmltopdf with my Python code, which is a C++ executable. Also, although the Python is 2.7, you can easily use the existing Python 3 environment or include your own.

As far as lock-in, it is trivial to run the same code on your own servers. I do that for my Python code in development, using a simple CherryPy setup in less than 100 lines of code.
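A dependency-free sketch of that local setup (using stdlib `wsgiref` instead of CherryPy; the handler is a made-up example):

```python
import io
import json
from wsgiref.util import setup_testing_defaults

# The same style of handler you'd deploy to Lambda (hypothetical example)...
def handler(event, context):
    return {"echo": event.get("message", ""), "ok": True}

# ...exposed locally through a tiny WSGI wrapper, so development runs
# against the exact same function code.
def app(environ, start_response):
    size = int(environ.get("CONTENT_LENGTH") or 0)
    event = json.loads(environ["wsgi.input"].read(size) or b"{}")
    body = json.dumps(handler(event, None)).encode()
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body]

# Exercise the app in-process with a fake request. A real dev server is one
# more line: wsgiref.simple_server.make_server("", 8000, app).serve_forever()
environ = {}
setup_testing_defaults(environ)
payload = json.dumps({"message": "hello"}).encode()
environ["wsgi.input"] = io.BytesIO(payload)
environ["CONTENT_LENGTH"] = str(len(payload))
captured = {}
def start_response(status, headers):
    captured["status"] = status
result = b"".join(app(environ, start_response))
print(result)
```

The handler never notices whether it was invoked by Lambda or by the local wrapper, which is what makes the round-trip cheap.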


The language lock-in issue seems to be mostly an issue with Lambda at the moment, and I suspect it will resolve itself over time. As is, even on Lambda you have the choice of JS, Python, and Java, which is a lot of flexibility. I expect we will see "official" Go, Ruby, JVM languages like Clojure and Scala, etc.

If you can't use some obscure hipster language, that might be a problem. But my biggest gripe at the moment is lack of Ruby, and I'm sure Amazon will fix that.


This is not a concern if the platform is open-source (like RethinkDB's Horizon) and you can choose to self-host.


RethinkDB doesn't seem to be a general purpose BaaS. I definitely don't think BaaS is a bad thing. It's just that the current flavours are not worth it. They are mostly some kind of MVPs.

Edit: I'm not very familiar with Horizon. It looks promising but I assume it's tightly coupled with RethinkDB?


That's true for things like Parse, but much less true for Lambda.

For the most part, Lambda code is just normal JavaScript/Python. It would take fairly minimal effort to take that code and package it up to deploy on a normal (non-Lambda) server.


Then why not have standardized APIs? You could just switch implementations, then, if you stop liking your current vendor. (Or maybe the future lies in call-by-meaning architectures and automated adapter generation?)


Someday a standardized API will become popular enough that AWS will need to support it, but they aren't going to do anything to hasten it.


Shouldn't serVER architectures be/get easy enough that the flexibility/transferability far outweighs whatever the BaaSes are providing? Is it really that hard to run a server in this day and age?


Depends what you're doing. If your code is simple but serving at scale is the problem (Twitter?) then I can see why this kind of architecture appeals.


I'm cautiously optimistic that the vendor lock-in will fade away over time. Hopefully open source frameworks can provide a jQuery-like abstraction layer over all of the vendor-specific implementation details. Of course, there are lots of challenges when you start getting into deeper architectures - queues triggering layers of lambdas interacting with various caches and data stores. But I think we'll see best practices emerge that help people keep things simple and flexible.

For testing it should be possible to run everything in a mock vendor system on your local machine - not that such a system exists today, but theoretically it could.

Overall, my sense is that Serverless architectures will never be useful for everyone. They'll always be better for smaller, simpler systems that don't see tons of traffic. The Serverless community should focus on these use cases. The thing is, there are tons and tons of apps that are over-served by all the flexibility EC2 provides. And there is another set of unborn apps that haven't been created only because the barrier to setting up and managing the backend was a _little_ too high. I'm really excited to see serverless bring these apps to life.


I actually think the opposite. Serverless architectures are the future for most of the app servers that need a language agnostic API. Maybe I didn't get it right but to me BaaS is just a PaaS with a common API interface(i.e. authentication, input/output serialization, cache etc).


There's a pragmatic definition buried in the article... if you can't spin up your app in 20ms and run it for half a second effectively, it's PaaS, not serverless.


I truly hope that people stop misleading the masses with buzzwords. We already have too many of them being used for clickbait. I don't get why a simple client-server architecture would not suffice without the "serverless" label. Software engineering and architecture design used to be cool in the past, with useful design patterns. Now it's full of tricks/buzzwords that promise users silver bullets to help them manage code and servers. In reality, no such silver bullet exists, and such tricks/architectures fail every day. I've noticed that martinfowler.com is one place that seems to provide such empty promises of a silver bullet!


I am reading about this trying to think what it could be useful for. (I am sure there are some, I don't see any use for it with my work).

"Serverless" seems to be the pricing model. It still runs on a server.


We used to have serverless architecture... it was called standalone desktop apps! No servers, no backend, no nothing, just you and the code. It was great!


It wasn't great. Updating apps was a huge pain, which meant that bugfixes were fewer and farther between. You couldn't sync preferences/settings between computers. Remember getting a new computer or wiping one? You knew you had hours of work ahead of you, just reconfiguring it. Did you write a shell script to do that for you? Too bad, it did something weird along the way, and now you have to spend hours figuring out what's wrong.

And don't even get me started on installing dependencies...


It honestly wasn't so much of a bother. Updates were really the only pain point, but then programs received updates periodically at best. (Talking dial-up era here.) The cost of distribution motivated developers to release much higher-quality code, relatively speaking, as not doing so cost you more (in terms of disks + shipping, additional bandwidth, etc.) as well as earning a much greater amount of ire from your users.

If you were smart you had a system for dealing with migrating between computers. Maybe it was a script, maybe you knew which folders to copy, maybe you were extra careful with where you saved everything. Maybe you didn't care.

Some of the modern shit is nice (I fondly remember setting up a personal Active Directory+Exchange server to get a private LDAP sync, compared to Google now) but as a whole the current model encourages carelessness and mindless profiteering. There's too much of the equation nowadays in control of people who want to charge you for it, whereas before the user had control. I miss the days when it was considered rude to require a network connection unnecessarily.

Plus, nowadays a product you love can turn to mush and there's nothing you can do about it save stop paying for it (and lose its services in the process), which will never amount to anything unless people do it en masse. Features regularly disappear because of some bullshit business reason or another, or because some user metric said only 2% of people used it. Well, fuck those 2% guys, right?!? Before, if I didn't like your patch I could just revert to the old install discs. Nowadays I have to pray you don't alter your deal further.

edit: and don't get me wrong, as a developer the evergreen model is great. But as a power user, I hate it.

edit2: not to say older software didn't have its crap. Lots of fussy stuff to deal with. But compare the initial release of Windows 3.1 to Windows 10...


There's a big difference between PaaS, where you hand over control of your backend, and FaaS, where you just outsource a small part of your infrastructure.

FaaS solutions like algolia, getsentry, getstream.io, mapbox, layer, keen.io, twilio, stripe etc only run a small part of your application. They scale well, are more reliable, and often more cost-effective than an in-house solution. If you're ever not happy with them, it's easy to switch to an alternative. If Algolia no longer works for you, simply switch to Elastic. If you're not happy with Mapbox, use Google Maps. No longer like getstream.io? Use Stream-Framework. It's relatively easy to change since you're not handing over your entire backend.
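One reason the switching cost stays low is that these services tend to sit behind a narrow interface in your own code. A minimal sketch of that adapter pattern (the class and function names are illustrative, not real vendor clients):

```python
# Sketch: a thin adapter layer so swapping one hosted search service for
# another touches one class, not the whole codebase. InMemorySearch is a
# local stand-in; a real adapter would wrap the vendor's client library.
from abc import ABC, abstractmethod

class SearchBackend(ABC):
    @abstractmethod
    def index(self, doc_id, doc): ...
    @abstractmethod
    def query(self, text): ...

class InMemorySearch(SearchBackend):
    """Local stand-in for a hosted search provider."""
    def __init__(self):
        self.docs = {}
    def index(self, doc_id, doc):
        self.docs[doc_id] = doc
    def query(self, text):
        return [d for d in self.docs.values() if text.lower() in d.lower()]

def find_products(backend: SearchBackend, term: str):
    # Application code depends only on the interface, never the vendor.
    return backend.query(term)
```

Switching vendors then means writing one new `SearchBackend` subclass rather than rewriting every call site.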

Also see my post on HighScalability about this: http://highscalability.com/blog/2014/12/22/scalability-as-a-...


Twilio, Stripe, etc. aren't what the article is referring to as FaaS (which, incidentally, I've never heard of outside this article but seems pretty descriptive). They're talking about things like Amazon Lambda, Google Cloud Functions, and Parse Cloud Code, where you write code that runs on a third-party service. The entry & exit points of the code are under the control of hosting provider, which distinguishes it from PaaS, where they normally take a well-known platform that'd work fine on your laptop and then perform hosting and administration.

Think of it as the API vs SPI distinction that we used to have back in Windows-land. API-only services like Twilio or Stripe are usually just called SaaS.


With the downside being the lack of cohesion with the rest of your system(s).


I'm an early adopter of serverless applications. A few weeks ago a tool was released that makes managing our Amazon Lambda applications even easier.

https://github.com/jorgebastida/gordon

A few examples can be found here; it's a good place to start if you're using AWS Lambda:

https://github.com/jorgebastida/gordon/tree/master/examples


It's sometimes okay to use inexact or uninformative terminology, but it becomes a problem when words are used to mean their exact opposite. So if you are going to talk about serverful programs - those which depend on servers and can't operate without them – that's fine, but please use some term other than 'serverless'.


Would have liked to see some real cost analysis, rather than just "it should save you money", because that is only true when scale is small to medium.

That's the thing: I guess for the demographic Lambda is going for, it will save money. But along with the mentioned vendor lock-in, when you really need to have something on and processing requests non-stop, the cost savings really evaporate.

It's weird to me that the serverless thing is catching on when it seems best suited for projects in their infancy without need for 24/7 request processing, or for things out of the critical path like image resizing. But entire apps? Feels very much like going back in time to shared hosting.

I could see it saving money until about 1m requests a day and then a sharp drop off where you are hemorrhaging cash after that...

With modern auto scaling systems that aws and gce provide I bet most shops don't stand to save any more money than with a normal modern day refactor.

At the end of the day this leaves you more rope to hang yourself with. It makes it easier to run slow or poorly thought out code. 1000 processes might be cool, but they are never as cheap as 2 working correctly that accomplish the same thing - an extreme but actual example of a situation I helped unwind with GAE, which was serverless before the buzzword.


Aren't 'serverless' PAASs just a reinvention of PHP hosting?


Adrian Cockcroft: "If your PaaS can efficiently start instances in 20ms that run for half a second, then call it serverless."

https://twitter.com/adrianco/status/736553530689998848


No, call them extremely fast lifecycle servers.


That's a lot more syllables than "serverless".


I see similar parallels, but where as PHP shared hosting often meant one server handling multiple sites upon request, FaaS seems to be more generic with multiple (virtual) server instances handling multiple site functions upon request.

So while each site was a PHP monolith on shared hosting, a FaaS architecture could be many smaller units of code, spun up on demand to handle a specific need of a service, possibly across multiple servers.


A better comparison might be PHP on MediaTemple's Grid service circa 2006 (https://techcrunch.com/2006/10/17/media-temple-crushes-share...)


Well, they are. I've always asked myself why there was no such service... then came GAE, and then Docker, and here we are!


No. For example, Lambda has no service endpoint for call/response. The platform starts a function container, and passes events to that function instance until the queue is drained.

It is not at all a client/server model unless you wrap API Gateway around it as an event source.
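A rough sketch of that invocation model (all names illustrative): the platform, not your code, owns the loop that pulls events from a source and calls your function; there is no socket your code listens on.

```python
# Sketch of the FaaS model described above: no listening endpoint. The
# "platform" drains an event source and invokes the user's function once
# per event. `user_function` and `platform_invoke` are hypothetical names.
from collections import deque

def user_function(event):
    # The only code the developer writes and deploys.
    return {"processed": event["id"]}

def platform_invoke(event_queue, fn):
    """What the provider does for you: call fn per event until drained."""
    results = []
    while event_queue:
        results.append(fn(event_queue.popleft()))
    return results
```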


It's so true that it hurts to think about it!


We're currently exploring lambda for certain applications as well. My coworker wrote up a nice little post about it:

https://product.reverb.com/2016/07/16/deploying-lambdas-with...



Previous (recent) discussion at https://news.ycombinator.com/item?id=11921208


I submitted this link: http://martinfowler.com/articles/serverless.html#drawbacks with the title "Serverless Drawbacks", as that was the recently added section. Guess someone changed it, but it's not a dupe :)

In any case, it seems that "Benefits" has also been added since the last HN discussion (revision history: http://martinfowler.com/articles/serverless.html#Significant...).


For my first serverless project, I built a service that manages an advertising account, by analysing ads and optimising the spend spread for a configured goal. I was surprised to find that one of the most difficult aspects, was in throttling outgoing API requests. Basically, I needed to avoid abusing a 3rd party service by hammering it with parallel or rapid fire requests.

I discovered all kinds of weird solutions:

- Push a "please wait" message onto an SQS queue, and set a CloudWatch alarm to fire when the queue length was >= 1 for at least one second. Have a variety of lambdas respond to the alert by checking if they have any work to do, and then racing to pop the queue and do one operation. I think I needed SNS in there too, for some reason. Fanout?

- Build a town clock on an EC2 and have it post "admit one" tokens to SNS. Nice and simple, but I was evaluating serverless, and adding a server felt like defeat. Might as well just run the whole thing on that EC2.

- Have one lambda pop request descriptions from a queue, and sleep for one second after sending each one. The request responses would have been pushed onto another queue. All of the other lambdas could just handle the events they cared about as quickly as data became available. Sleeping in a lambda just felt wrong, but for one second, it probably would have been fine.
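For what it's worth, that third option sketches out to very little code (the queue and `send` function here are hypothetical stand-ins; the pacing is just `time.sleep`):

```python
# Sketch: one worker drains a queue of request descriptions, firing at
# most one per `delay` seconds. `send` is whatever makes the 3rd-party
# API call; `sleep` is injectable so the loop can be tested without waiting.
import time
from collections import deque

def drain_throttled(queue, send, delay=1.0, sleep=time.sleep):
    """Pop request descriptions and send them no faster than one per delay."""
    responses = []
    while queue:
        request = queue.popleft()
        responses.append(send(request))
        if queue:                # no need to sleep after the final request
            sleep(delay)
    return responses
```

For example, `drain_throttled(deque(["req-a", "req-b"]), send=str.upper, sleep=lambda s: None)` returns `["REQ-A", "REQ-B"]`, sleeping once between the two sends.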

In the end, I was still experimenting and exploring when my estimate loomed near, so I just wrapped up all of the application code and piped it all together with delays in loops that made API requests. Deployed it to Heroku in a hurry with minimal fuss.

What I learned:

- There are bad use cases for serverless, and I had started on one (SLS doesn't wait on no one)

- Serverless does encourage a clean architecture, from a code perspective: I was able to quickly transform it into a completely different type of application.

- AWS's services have lots of little surprises in their feature set: SQS doesn't fire any kind of event. SNS can't schedule events in the future (maybe I should have expected that one). CloudWatch events can only trigger a single lambda.

- There is a need for a town clock / scheduler as a service in AWS. How many people have built that themselves?


> "- Push a "please wait" message onto an SQS queue, and set a CloudWatch alarm to fire when the queue length was >= 1 for at least one second. Have a variety of lambdas respond to the alert by checking if they have any work to do, and then racing to pop the queue and do one operation. I think I needed SNS in there too, for some reason. Fanout?"

Did you want AWS Kinesis? The integration with Lambda polls the queue every few seconds and sends batches of records to a Lambda function when the queue has work to do. http://docs.aws.amazon.com/lambda/latest/dg/with-kinesis.htm...

When you evaluate the AWS serverless platform, I recommend looking at everything that is an event source, sink, or processor: Lambda, SNS, SQS, SWF, SES, Cognito, IAM, Kinesis, IoT, API Gateway, S3, DynamoDB.

It's a pretty rich palette to work with.


I did look at Kinesis too, but for my use case, it didn't really help much over SQS. There is still no way to put a Kinesis event "in the future" so that I won't see it for a second, and I would have had to set the batch size to 1, if I was using Kinesis as a means to throttle an activity.

In the spirit of treating this like Lego, I suppose that maybe I could have used SES to email itself an HTTP request description, and counted on the email lag to keep requests spread out over time. But at this point, things are getting so weird that I think it's time to just admit that "work on something at a specific pace, then go to sleep for a few hours" isn't the sort of task that the serverless architecture is well suited for. Serverless is for tasks that you want done on demand, as quickly as possible, imo.


I'm no experienced Lambda user either, but here's a couple helpful things I've found:

* I think API Gateway can throttle requests. Not sure if that was part of your pipeline or if your lambda had a different trigger.

* There is a Town clock SNS event https://alestic.com/2015/05/aws-lambda-recurring-schedule/


Ah yes, I forgot about that. What I really needed from the clock was for it to somehow only trigger lambdas that actually had work to do. I didn't like the idea of them polling once per second per function, when 99% of the time they would have nothing to do. I guess "town clock" was a mistake on my part.


Sort of related, I've implemented a GitHub pages based serverless app demo, that uses GitHub API to execute git operations from the browser. Though slightly weird, I really enjoyed this experiment, and think there might be something to this concept.

https://github.com/idoco/GitMap


"Serverless" WTF? I thought it would be a cool article on peer to peer mobile apps that don't rely on a backend server, something I think will be very popular in future for a variety of reasons.

Please, let's not accept this redefinition of a fundamental and well known term.


I expected the article to be about the goodness of some peer-to-peer architecture that doesn't require central servers, not a twisted definition of cloud computing.


One of the big issues here that is only very very slightly glossed over is the security you give up. The only sort of security filtering you get is AWS' WAF, which is considerably weaker than a firewall like mod_security with a default ruleset, or even apache's .htaccess. Inbound filtering, you're limited to a tiny ruleset with only a few conditions, and outbound filtering doesn't exist at all.

This lack of firewalling continues up and down the stack. As such, it's a lot harder to create rules regarding any API calls you make to third party services, harder to audit how your app interacts with any datastores, and generally an administrative nightmare. It may be useful for some apps, but just feels like a nightmare to maintain for any decently sized setup.


The sales pitch is that the fabric vendor does the administration, and people like you are out of a job.

Because the framework infrastructure is supposed to solve all of that behind the curtains, so developers cannot shoot themselves in the foot. (Other than through excessive resource consumption)


Did you look at Lambda scheduled events?

http://docs.aws.amazon.com/lambda/latest/dg/with-scheduledev...


This may definitely be a concern for some organizations, but I think there's a lot of focus on serverless being all about apis and web requests. Serverless is also a great way to manage stream processing, and job queues - where these issues are less relevant.


I recently evaluated these, and I'm dying to use something like this. I would really love to be able to do this with sandboxed apps written in arbitrary languages like Go, Haskell, Rust, or even C++. Ideally, I wouldn't be managing docker containers or VMs to get certain jobs done.

Unfortunately, it's limited to JavaScript, Python, and Java for now. Google Cloud Functions is similarly limited to JavaScript.

I know that these languages were chosen because they can be easily sandboxed, but it would be nice to support something more generic without having to create docker containers for everything.


Have you checked out https://github.com/iron-io/lambda? There is a tool for Docker container creation, and also AWS Lambda compatibility.


So apparently "serverless" means you don't have to requisition a cloud instance to run the service.

I remember this movie the first time around, when it was called "virtual hosting". What a revolution -- transitioning from needing a Unix box running Apache to run your PHP e-commerce site to having your host take care of that for you -- and everybody else.

Just because you're doing something old "in the cloud" now doesn't make it a new thing. Just like you can't append "...with a computer" to an existing method and get something patentable.


Does it matter whether something is new or not? The fact is there are services such as AWS lambda that people are using and talking about, so we need terminology for that.


I think it does matter in a lot of circumstances.

Take NoSQL which was hyped as something new 5 or 6 years back. Because of that everyone under a certain age decided that they had to use it.

The older, more cynical of us realized that this was how people had been writing software before SQL came along, and that we were throwing out a whole host of features and benefits because this was the "new" way of doing things.


Not really, as long as it's useful. But call it what it is -- hosted services. Don't make up a new, nonsensical name.


Hosted services doesn't really differentiate what they are. DynamoDB is a hosted service. So is EC2 and ECS.


Then call it function hosting or business-logic hosting. "Serverless" means "there is no server" which is demonstrably false. Or should I assume that words don't mean things?


The main point of serverless is cost - it is not just "reduced operational cost" but an orders-of-magnitude difference compared with building cloud solutions that can scale to the same level: you pay for cloud infrastructure (even if unused, at minimum something must be running...), monitoring, and the people effort involved on an ongoing basis (the real cost).

Compare that to working at the level of abstraction of an "entity" (a function, though it could be bigger) and having it run when needed and scale as needed.


One big challenge is the expense that is otherwise avoided through connection pooling. Without the hack of keeping the backing containers alive, tearing them down and restarting them is expensive: it wipes out all hot-path optimizations made by JIT compilers, besides having to re-establish connections and pay the handshake prices. Apart from the simplest of use cases at this time, imho this is not an efficient model if sustained performance and low latency are of interest.


I've been wondering this same thing. We use gunicorn to handle our web requests and it pre-loads our code so it's ready at the drop of a hat to serve a request. It takes a few seconds to load up our site, so wouldn't Lambda have to pay that cost with every web request?

I've been going under the assumption that it's just not the right use case for Lambda, though I'm hoping I'm wrong.
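One common mitigation, a sketch rather than anything gunicorn- or Lambda-specific: do the expensive initialization at module scope, so warm containers reuse it across invocations and only cold starts pay the load cost. `make_connection` below is a hypothetical stand-in for your slow app import or DB client setup.

```python
# Sketch: module-level init runs once per container, not once per request.
# Warm invocations of handler() reuse CONNECTION without re-paying the cost.
INIT_COUNT = {"n": 0}

def make_connection():
    INIT_COUNT["n"] += 1          # pretend this is the slow part
    return object()

CONNECTION = make_connection()    # executed at cold start only

def handler(event, context):
    # Every warm invocation skips make_connection entirely.
    return {"conn_reused": INIT_COUNT["n"] == 1}
```

This doesn't eliminate the cold-start hit, it only amortizes it, which is why the parent's concern stands for latency-sensitive, bursty traffic.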


If you have any kind of real load, there will always be instances running. If you have less load than that, you are not a case worth optimizing for, because you pay less than a few hundred per month.

Also, small instances are cheap to keep around, and as you pay per request, it's in the provider's best interest to make that efficient.

Well, that's the sales pitch, anyway...


I wanted a good tutorial/walkthrough on this topic.

Has anyone gone through https://pragprog.com/book/brapps/serverless-single-page-apps?

The example material looks good. Overall it could be a bit short.


What a ridiculous and strange definition this is.

It's an architecture that relies on multiple, supposedly distributed services, all of which are hosted on servers...

Why not just call it "Multiserver" instead.

If they mean "containerless servers" or "microservices", then why don't they say so?

If they mean "distributed servers" why not call it that.

If they mean that the client relies on multiple services (each hosted on different servers) why not just call it a thick/fat/stateful client?

Even a database is a remote service (if not embedded), that is run on some sort of server.

Calling anything serverless is just ridiculous as long as it depends on services on the network.

This smells really bad of just another attempt at marketing a pointless definition just to get more business.


The phrase that Amazon used when launching lambda was 'deploy code not servers'. To me this sums up what 'serverless' means. It means the developer doesn't have to worry about servers in any way.

With AWS Lambda/API Gateway (and arguably with Google App Engine before it) you take away the toil of having to:

  * Manage/deploy servers
  * Monitor/maintain/upgrade servers
  * Figuring out tools to deploy your app to your server
  * Scaling an app globally.
  * Coping with outages in a data-centre/availability
  * Worry about load-balancing & scaling infrastructure 
So obviously there are still servers there, but they are largely invisible to the developer.

I think this is more than just a marketing gimmick. It is part of a big change in application architectures.

At one end, as a small developer of a web/mobile app, this can considerably reduce the amount of code/maintenance you need.

At the opposite end of the spectrum, the likes of Google (Borg) and Facebook (Tupperware) have developed their own in-house solutions whereby the server, as an entity developers need to worry about, is largely abstracted away.

Managed docker services (e.g. Google Container Platform, Docker Cloud) are another approach of achieving a largely 'serverless' goal.

(edits for formatting)


> So obviously there are still servers there, but they are largely invisible to the developer.

This would only be true if the abstraction were not leaky. Frankly, upon a very cursory inspection, my opinion is that AWS Lambda leaks like a sieve.


I would call that a deployment strategy, not an application architecture.


No! That's EXACTLY the point - this is EXACTLY what it isn't - a deployment strategy.

Docker vs Ansible is a deployment strategy.

Relying on others to make your code run on a server is VERY different than doing it yourself. It's practically the difference between a company with a DevOps team and one without it.


So my job is:

Write server-side code as part of my application, targeting some platform API. Once finished, invoke some deployment routine.

The service provider's job is:

Receive a request from me to deploy my server-side code and keep it running on some server cluster.

That's not a serverless application architecture because the application contains server-side code.

If you don't want to call it a deployment strategy, fine, I won't insist on that name, even though in my view using a service provider for deployment deserves to be called a deployment strategy.

More generally, it's an approach to make my server-side code run once I'm finished writing it. Call it what you will.


I agree. It is preempting the term from a true serverless architecture: a swarm of symmetrical clients/servers that implement an application and store application state distributively.


I agree. Someone is still hosting the servers and services. It is just someone else. It is also just a re-spin of the term Application Service Provider. That term never gained popularity.

There are some discussions of APIs here. Whether or not a service has an API is orthogonal to whether something is a hosted service.


No, the idea is simple - the servers are invisible to the user, he doesn't know what or how many are in use.

Since the servers are completely abstracted away, they could have just as well not be - hence serverless.

An analogy escapes me, but I'm sure there are some.


So it is about how the user perceives the application?! If so, why is it called "an architecture"?

Or do you mean that the user doesn't have to care about how to configure the application? Then "zero-configuration" is a better name.

Or do you mean that the application hides that it uses services on the net? Then it is even worse.


> So it is about how the user perceives the application

The user here is the developer. Here's an explanation from the article:

> "some amount of server-side logic is still written by the application developer but unlike traditional architectures is run in stateless compute containers that are event-triggered, ephemeral (may only last for one invocation), and fully managed by a 3rd party ... One way to think of this is Functions as a Service."

FaaS is probably a better name.


I also dislike the term serverless. It's confusing, doesn't relay the information it should. I can't suggest a good replacement, but it's painfully obvious that even here, most people commenting don't actually know what serverless means and what its implications are.

Serverless really is an auto-scalable distributed infrastructure, perfect for small CPU-intensive tasks in an app which doesn't know how many CPUs it'll need at any one time.


> they could have just as well not be

That's a weird logical leap.

Let's make the servers not be by unplugging them. How's the abstraction working now?


I think you're misunderstanding. I read it as "they could just as well not be servers". It could be Diamond Age rod logic. It could be processing implemented on quantum foam. It could be gnomes. The point is that from the user's perspective, they no longer have to think about servers.


Sure, I get it. But we already have that idea.

The whole idea of a server is to abstract away the details of the upstream computer and its software stack, so the user can think of it only as a provider of an abstract service. The irrelevance of their implementation as a host, VM, box, rack of boxes, pool of quantum foam, string and sparkles, is already built into the concept.

We all know that there's no unique process that is http://amazon.com. It's a service, provided by a (vast distributed) server.


You use "server" to mean "a vast array of servers"? I don't think I've heard that before.


I'd say that http://amazon.com is the address of Amazon's web server, yes. Does that sound wrong?


I don't think I have ever heard anybody use "server" to mean "lots of servers". Perhaps "web site" or "service" or "cluster" or even "load balancers" depending on whether they want to talk about what they provide or what does the providing.


Now that it has a name, it can develop a hierarchy of experts, annual conferences, and a consultancy bonanza for a few years.


I think Fowler tends to do this in general. Here's a comment from 2 years ago I posted along the same lines: https://www.reddit.com/r/programming/comments/2051mx/microse...

(And yes, 2 years ago I was a little more violent with my wording. Time has taught me things.)


This article wasn't written by Fowler.


In this context "server" doesn't mean "entity listening on some network endpoint, providing some service to network-connected clients" (which is I think the definition you're assuming). Instead "server" here means "Unix Machine".

Thus "serverless" means software and humans writing and deploying said software that do not need to know anything about /etc/<whatever> and installing packages and disk partitioning and swapping and whether Python 2 or Python 3 gets installed by default this week and ...

IMHO a good thing, mostly.


So my car is engineless if I never look under the hood?


I think this architecture is public transportation in that analogy.


Your car isn't engineless because when its engine dies, you will know about it and have to do something. S3 is serverless because if one of the servers storing my files die, I have no way of even knowing it happened.


S3's had downtime, and had data loss incidents. S3's only serverless if you believe Amazon's infallible.


I didn't invent the terminology. Like many many things in this business the terms used are often confusing/wrong/overloaded/re-invented-from-the-1980's and so on. Part of the landscape unfortunately.


Maybe it's worry-free if you don't need to look under the hood. If the manufacturer locked the hood, you would never know :)


They actually did lock the hood with the Audi A1. Just had a flap through which you could top-up water and oil.

Perhaps we could adopt 'A1' instead of the confusing term 'serverless'...


As a DevOps guy the definition makes sense to me. I've got several parts of my operation that require me to spin up 3 - 30 instances to handle parallel load. I'm always monitoring queues to see if I'm scaling correctly. If I went 'Serverless' Amazon would handle that whole chunk for me. That's approximately 20% of my job taken care of.

I think to an end-user the definition is meaningless, but to its intended audience it fits.


I think it's basically a stateless PaaS/microservice with a set of constraints/APIs on top. The BaaS/serverless architectures also seem to have their own sub-categories based on the constraints/target. Some are fat (i.e. a kind of full-blown PaaS with a common/advanced API gateway on top, like parse.com) and others are lightweight (i.e. Amazon Lambda), meant to run as a function (e.g. update a record on event changes). In the end they are various abstractions that hopefully will improve security and help you manage your infrastructure more easily.


You're missing the point in that _you_ don't have servers. Just as cloud is "someone else's computer" -- serverless is someone else's server.


He's not missing the point. He might have a preference that technical terms have semantically useful meanings.


That's true for all types of cloud providers, but we don't call them serverless.

This is marketing bullshit for microservice architecture. It just sounds better with some added philosophy.


I imagine serverless is a powerful buzzword because it implies cost savings on servers.


There is Google's Apps Script (based on JavaScript), which could also be termed serverless. It's ingenious for individual purposes: I have used it as a database (with Google Spreadsheets), for scraping and saving thousands of HTML/XML pages, and for deploying and demoing Bootstrap websites. It also has a time limit of 5 minutes 30 seconds, which can be overcome by controlling triggers or by using patt0's CBL http://patt0.blogspot.in/2014/08/continuous-batch-library-up....


Aren't we kind of already there (serverless)?

A lot of web frameworks already enforce a shared-not-that-much (not quite "shared nothing", but close) style of programming, where every HTTP request is essentially just an "event" that hooks into a controller handler. And if you want anything to stick around from one request to the next, you keep it in memcache/db/BaaS/a third-party service, not your own process.

I don't think this is as radical a shift as many here might think, especially given the move to containers is already well underway.


Kind of like "schemaless".


I think a major problem with serverless architecture, as awesome as it is, is that to manage it you need to already be familiar with the underlying systems that provide it. Hence, it is imperative that you have already worked with the architecture you are planning to replace.

Unless there is some elegant factor to serverless that I am missing, I just can't shake the feeling that at the end of the day this is just a sales tactic.

Nevertheless, I opt to use serverless BaaS solutions whenever I can afford to.


Why is it called serverless if it uses Amazon Lambda instances?


Because you, the developer, do not buy or pay for "servers," only ephemeral "code invocations."


Sweet! So by that logic, all static sites are serverless because I pay for "file hosting" right?


Serverless sounds like CGI evolved past needing a server.

Oh yeah, and remember, spawning CGI processes for every request never scaled well. :-)


Lambda does keep processes running and reuses them between requests if they're coming in quickly. You do, however, only get to handle one request at a time, which is not the norm for node backends.


One issue people are talking about is vendor lock-in. I would like to note that this drawback is only valid for non-prototype software. For prototypes, hackathons, or "one time use" websites, these BaaS tools (such as Firebase) are essential.

Here is a case study: Last year I was working on an SMS app, QKSMS [1]. We offered a premium version for $2. We did a promotion on reddit where we gave away 10k free premium versions. So, take a second and ask yourself: how quickly do you think you could implement a promo code system + a website for distributing codes for one time use?

We did it in about 5 hours. It cost about $100. The website was (and still is) statically hosted on GitHub. [2] The website source code is ~22 lines of JavaScript. It pulled 12 promo codes from Firebase; when a promo code was removed, it would (in realtime!) collapse it from the list and display a new promo code at the bottom.

The mobile app code was also very simple. First, check if the promo code is available (one API call); if so, enable premium and remove the code (another API call).

The reason it cost about $100 is that we had too many concurrent users: the Firebase free plan allows only 50 concurrent users, and at our peak we were seeing ~500. Since the promo was only for a day, we bought an expensive plan that got pro-rated for just that day.

It was an extremely successful promotion. [3] The final result was very interactive. It was amazing to watch the codes disappear in real-time. It was like a game: you had to be fast to enter the code, because the codes were being used so quickly.

All in all, I believe we made more money from people buying it anyway (despite the promotion) than it cost to serve. And keep in mind we built the entire system in about 5 hours, and I'm not even a web developer. An actual web developer could have implemented this in an hour or two.

For reference here is the entire JavaScript powering the promo code website:

    var all_codes = new Firebase("https://qksms-promo.firebaseio.com/public_codes");
    all_codes.on('value', function(data){
      $('#remaining').text(data.numChildren());
      if (data.numChildren() === 0) {
        $('#status').text('No more codes!');
      }
    })

    all_codes.orderByValue().limitToFirst(12).on('child_added', function(data){
      $('#status').remove();

      var key = data.key();
      $('#wrapper').append('<div class="code" id="' + key + '">' + key + '</div>');
      $('#' + key).hide().fadeIn(300);
    });

    all_codes.orderByValue().limitToFirst(12).on('child_removed', function(data){
      $('#' + data.key()).slideUp(300, function(){
        $(this).remove();
      });
    });
[1] https://github.com/moezbhatti/qksms

[2] http://qklabs.com/qksms-promo/

[3] https://www.reddit.com/r/Android/comments/36eix7/dev_a_year_...


We got around 15,000 downloads that day (compared to the usual 300-500), and 10% converted to paid users.


If successful, this could lead to a reduction in the number of developers needed at companies that use, say, AWS Lambda.


> If successful, this could lead to a reduction in the number of developers needed at companies that use, say, AWS Lambda.

How so? To me it makes development even more complicated. Cheaper relative to hosting fees, maybe, but simpler? I don't think so. What if I want to use language XYZ, which isn't supported by Lambda?


> What if I want to use language XYZ not supported by lambda ?

Such problems can apply to any ready-made solution. In general, a vendor product may or may not suit custom needs, and a user is free to build one from scratch. Abstraction of major pain points that fits a wide variety of user needs, if handled properly, can pave the way for a competitive solution.


> cheaper relative to hosting fees maybe

Currently serverless is more expensive in the average case, from what I gather, mainly because it's new and there's little competition.


If the average case is that you stay at a steady load for long periods of time, then it'll likely be more expensive (given a certain floor of actual load; for in-dev apps it's dirt cheap, since you won't have many servers running doing nothing).


You can't. But isn't everyone using the same language simpler? More convenient? Maybe not.


If history is a guide, I suspect it will just increase the size and complexity of apps. It's amazing what small teams with serverless architectures can accomplish nowadays.


> If history is a guide.

Could you elaborate on this?

I agree the size and complexity could increase, but so could the number of features in the apps. Since all such apps would be reusing the same code base, wouldn't this increase abstraction and reduce implementation hours?


I think it's something like: a big monolithic app is less complex than an n-tier client-server app, which is less complex than a microservices distributed app, which in turn could be less complex than a purely FaaS-implemented app. That last part is yet to be determined.


I guess that's the whole point: more complexity overall, because you split the parts up more. The advantage is that you only scale the parts that need to scale.


How is this different from something like Heroku and its add-on ecosystem?


Heroku gives you servers but manages them for you, so you have long-running processes for web, workers, etc. Lambda lets the process run for literally the lifecycle of the job (a request, a streaming job, etc.). In fact, your job can't run for more than a few seconds or it is terminated. Heroku charges you for the hours your dynos run - but what if they're idle most of the time? You still pay for them - not so with Lambda.
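A rough sketch of that pricing difference (the figures are 2016-era list prices and should be treated as assumptions, not quotes):

```javascript
// Lambda (assumed): ~$0.20 per 1M requests plus ~$0.00001667 per GB-second.
// An always-on dyno/instance (assumed): ~$25/month regardless of traffic.
function lambdaMonthlyCost(requests, avgSeconds, memoryGb) {
  var requestCost = (requests / 1e6) * 0.20;
  var computeCost = requests * avgSeconds * memoryGb * 0.00001667;
  return requestCost + computeCost;
}

// Mostly-idle app: fractions of a cent vs. a flat ~$25 server bill.
var quietMonth = lambdaMonthlyCost(10000, 0.2, 0.128);
// Heavy, steady traffic: the flat-rate server can come out ahead.
var busyMonth = lambdaMonthlyCost(50e6, 0.2, 0.128);
```

The crossover point depends entirely on how bursty your load is, which is exactly the idle-dyno scenario described above.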


It doesn't seem like the author knows how network programming works. This is all high-level abstraction and diagrams, with no simple comparison of the pros/cons, the constraints, etc.


Also see "Unhosted web apps": https://unhosted.org/


I'm using the following setup for one of my sites:

Client -> post -> API Gateway -> AWS Lambda -> upload JSON to S3

Client -> get -> S3

Works pretty well!
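For what it's worth, the Lambda behind a setup like that can be tiny. A hedged sketch (the bucket name, key scheme, and injected client are illustrative; in Lambda itself you'd pass in `new AWS.S3()` from the aws-sdk):

```javascript
// Returns a handler that writes the POSTed body to S3 as JSON.
// `s3` must provide putObject(params, callback), as the aws-sdk client does.
function makeHandler(s3, bucket) {
  return function (event, context, callback) {
    var params = {
      Bucket: bucket,
      Key: 'submissions/' + context.awsRequestId + '.json',
      Body: JSON.stringify({ payload: event.body, receivedAt: Date.now() }),
      ContentType: 'application/json'
    };
    s3.putObject(params, function (err) {
      if (err) return callback(err);
      // API Gateway proxy-style response
      callback(null, { statusCode: 200, body: '{"ok":true}' });
    });
  };
}
```

Reads then bypass the function entirely, since clients GET the JSON straight from S3.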


Is it cost effective? I mean roughly.


Very, very cheap, as long as you write your lambdas to be as performant as possible. We've costed our app at ~2c/user/month with heavy API/database/Lambda usage.


It seems you are estimating the costs per user. I think an estimate per request, perhaps compared with raw EC2 servers, would make more sense. Anyway, it's encouraging to hear that it's cost-effective. My experience with managed/serverless platforms has been quite bad in terms of costs.


What if the web had the equivalent of 900 numbers? Something like http900://foo.bar.com


If you're looking for an open-source API Gateway (less lock-in) with Lambda Integration then Kong (https://github.com/Mashape/kong) could be a useful complementary tool for your serverless architecture.


I think you're missing the point of the 'serverless' part, as Kong still requires you to have a dedicated server to run it. One of the early-stage benefits of serverless architecture and AWS's API Gateway is that you don't pay for resources you don't consume - so if you've only got a small number of customers, you don't have to keep dedicated instances running.


IMHO serverless architecture doesn't mean that your app is 100% without servers. The backend is still serverless, but everything on top (like an API gateway, load balancer, service discovery, etc.) can have dedicated resources.


Fair enough - there are definitely use cases where I could see that making sense. I love the value lambda provides for early stage, when every penny counts.


The main problem I see here is the problem we run into with every hot new abstraction of a thing that some people find troublesome as developers.

Someone already mentioned virtual hosting. I'd like to bring up ORMs. There's this idea that you can forget about the datastore--abstract it away, manage it in code, whatever. You don't have to learn about databases, table structures, or SQL if you don't want to. Just think about your models and don't worry about the rest.

I'm not against ORMs and find them both amazing and useful in certain ways. What I am against are systems developed entirely by developers who refuse to think about these things, especially when I have to come into the situation after they have left and do something genuinely unforeseen, like provide reporting and analytics from the back end, integrate with a third-party system, or - god forbid - add new functionality.

What you're left with is a dysfunctional, slow mess that's incredibly difficult to work with, a data layer that makes no sense unless you're working with that particular ORM, and no ability to tell exactly where in a quarter-million lines of code the exact logic you want to fix or modify actually is.

Yes, I'm quite certain that no one would recommend that anyone use serverless architecture that way. It's the wrong tool for the job. Just like ORMs are the wrong tool when performance is critical and you need to manage complexity for 30 services from the same data layer.

But someone is going to do it. In reality, probably lots of people will. Some CTOs will buy into the hype and hire teams of consultants to do it or hire their actual dev team full of serverless experts.

I'd echo what someone else said, to the effect that abstractions are great so long as you understand what you're abstracting and know when it's a good idea to do so.

Viewed as a tool for accomplishing a small thing pretty quickly, it sounds cheap, fun, and exciting.

Viewed as a new way of taking shortcuts around quality infrastructure that's needed to support large, complex systems, I feel like it's probably a bad idea.

Given my attitudes above, it's probably not a surprise that I find the description a bit sketchy. I think calling it Serverless is a misnomer for all of the many good reasons people have pointed out in this thread. But I also question calling it architecture.

What I expect we will see in fairly short order is a collection of libraries or frameworks that resemble PHP not only in function (a service that pops in and out of existence on demand) but also in form: a language-like thing that evolves rather than being intentionally built, and evolves towards ease of beginner use rather than being guided by a coherent architecture.

And, in the grand scheme of things, that's fine with me. I just hope I never inherit a system like that.


This seems to be true for a lot of "new" tech.

I honestly think people should learn to build standard apps the standard way - (SQL) database, application layer, HTML/view layer - then learn to scale the parts as needed. It's well understood. There are good frameworks and tools available, and plenty of historical examples of how to do things.

Everyone seems to want to jump on the shiny new way of doing things as soon as it comes out. I often ask people the reason for choosing their architecture - "that's how we do things these days" or "relational databases don't scale" seem to be the standard answers.




