Things I’ve learned using serverless (acloud.guru)
163 points by rmason 77 days ago | 129 comments



I read this blog post assuming I was actually going to learn a thing or two about best practices in the serverless paradigm, some ops/observability tricks, and such.

It turned out to be a complete AWS advertisement, plus hand-waving references to a bunch of other blog posts without any good explanations. What makes me curious is: don't these people ever read Hacker News and see what good technical blog posts look like (unless they're just a bunch of paid evangelists writing blogs with catchy titles)?


acloud.guru is a name I recognize from some (halfway decent, to be fair) Udemy classes I took a few years ago. That likely explains what you're seeing.


It seems like you've traded a bunch of open source solutions for a walled garden of AWS and Amazon tools.


The trade-off is cost. The article even mentions they drove operational cost down by at least 70%. You can still run whatever open source library you need inside the Lambda (you still have to ask whether it's worth the extra bytes), but yes, you are betting big on AWS. Google Cloud's serverless offering is way behind right now.


I wonder how much they could have saved in operational cost, and how much that actually is in raw money, if they had spent this time trying to optimise cost without rewriting the architecture. From my personal experience, saving 70% on AWS costs is really not that difficult.


I would argue that this is a poor trade-off. Servers are generally cheap for most projects while labor is expensive.

When using percentages, you always have to ask: 70% of what?


This has been my experience. Serverless is frustrating beyond belief. I haven’t deployed to production yet so maybe we’ll realize benefits enough to warrant the frustration at that time but so far I have serious doubts.


You can fire at least half of the labor after launch and maintenance can be done by fewer people or ad-hoc contractors.


This might fly in some niche software product, but for anyone who wants to run a permanent software business this is a recipe for disaster.


Sure that's immediate per month savings.

What happens when/if Amazon changes their offering to something that makes your system incompatible overnight? What about your keys getting filched and you inadvertently running 100-GPU bitcoin clusters?

How much would you spend on an emergency mass migration somewhere else? Would your company even survive?

People who choose to use Amazon-exclusive APIs will get bit. It's not an if, but a when. I'm not saying "don't buy EC2 instances or S3 storage"... those in the end are just VMs and storage that you can purchase elsewhere. But who else runs "Lambda"? What is your migration plan if they cancel your service, quit offering it, or change it?


> What happens when/if Amazon changes their offering to something that makes your system incompatible overnight?

They won't, or at least historically haven't. There are no guarantees, of course, but this seems like a low risk.

More likely is that AWS will raise their prices (or be undercut by a competitor) such that it makes financial sense to migrate to a new platform.


I may be overstating the "API change overnight" issue, but your comment does not address losing API keys, being banned as a customer, or other types of events that would cause an org to lose service.

I remember something very similar happening to a Firebase customer, where surprise billing took them from $10/mo to $1,600/mo. That's the class of "oh shit" I'm talking about.


It's a real concern with AWS. I dealt with an incident where a dev-ops full-access API key accidentally got checked in to a public repo. Within an hour, there were hundreds of instances running at 100% CPU (presumably a bitcoin farm) in our production account.

We didn't get charged for the usage, though we did have to talk to an Amazon rep to alert them to what had happened.

It's good architectural design (these days) not to marry yourself to your underlying platform. As a core system design, Lambda worries me because of that vendor lock-in.


If he actually had scale, he would not be saving on cost. Ironic for a blog post advertising "unlimited scale". There's little chance the author has built anything that has scaled efficiently yet.


Nobody ever shows any love for Azure in these discussions.

Disclosure: I work on Project Riff at Pivotal.


I've used both Azure Functions and AWS Lambda in production environments. Azure Functions feel rushed out the door, with gotchas/problems around every corner, including major stability issues. Azure Functions are mid-transition between v1 and v2: v1 is becoming outdated, with NuGet version lock-in, and is cluttered with gotchas, while v2 is plagued with stability problems and breaking changes every other month.

AWS Lambdas have had more refinement done on them. For the time being I wouldn't recommend Azure Functions unless there are non-technical motivations.


OK, I'll do it, although this one requires that you have a live Kubernetes cluster to run your functions on.

I haven't heard much about it other than that it is more friendly open code from the lovely people that brought us Deis and Helm:

https://github.com/Azure/brigade

Hey, I bet you've heard of this, it sounds like Riff is absolutely in the same space :D

I think for most small enterprises today it's not too much to ask that you have a Kubernetes cluster with autoscaling provisioned somewhere. I think in 2018 you're not serious if you don't have at least that (or something comparable, although I've heard "the war is over" and agree that people should just get comfortable already with the idea of K8S if they haven't yet)

There are enough managed offerings today that don't charge anything for masters, where you can simply push a button and get a cluster that is properly configured, and push another button to tear it down when you're done, or call an API and get the same effect.

I know that's not really "serverless" now, and it's all about the cost of running computers in the cloud on a 24/7 basis, so tell me if you've heard this one before...

I've never succeeded in standing up a Kubernetes cluster with an ASG for workers that will scale all the way down to zero when demand for worker nodes evaporates for a long enough period of time (10-30 mins?). Admittedly I've never spent that much time trying either... I am privileged to have some real physical computers plugged into the wall that I don't have to turn off, so I guess I just don't have to think that way.

There's just not any technical reason that won't work though, is there? You'll need the master(s) to hang around, so it's possible to notice Pending pods and scale back up when the demand returns, right?

(So why am I not seeing this capability advertised or demoed by any managed Kubernetes provider? Is it really just the simple economic answer that, given the pricing model of no-cost masters, they don't make any money off you during a period when you aren't running any worker nodes?)


> Hey, I bet you've heard of this, it sounds like Riff is absolutely in the same space :D

I have, and I admire a lot of the work the Deis folks have been doing at Microsoft. I have different opinions about what the future looks like, but I could be wrong. And I'm not the only member of the riff team.

In terms of "scale to zero" for workers, I think your "two whys" need is containers on-demand, not workers on-demand. That need is going to be met by the various virtual kubelet efforts underway. Azure have been out front on this, actually, with AWS Fargate coming hot on their heels. I expect that as GKE matures it will hit this too.

As we move towards "five whys", it turns out that we are essentially re-treading the path that Cloud Foundry got to years ago (and Heroku before that): focus on making it easy to run code.

Containers are in themselves an almost-irrelevant implementation detail 99% of devs should never have to care about, just in the same way that most of us don't think about mallocs any more.

I call this the Onsi Haiku Test, after the `cf push` haiku that Onsi Fakhouri gave at a conference a few years back:

    Here is my source code.
    Run it on the cloud for me.
    I do not care how.
And coming into riff from the Cloud Foundry universe, one of my personal agenda items is that riff should pass the Onsi Haiku Test with flying colours.


I would love to hear more of this kind of talk.

I'd really like to get you in a room with a couple of architects and technology leadership in my office. (No, seriously, maybe a Zoom room.)

I'm on the Kubernetes train, but they are mostly still pinning their hopes on Fargate, having never made this leap, and I have a feeling that I never would have gotten into the k8s world without the kind of help I got from Deis.

> Containers are in themselves an almost-irrelevant implementation detail 99% of devs should never have to care about

Couldn't agree more. Deis made this easy for me before it was on Kubernetes (CoreOS and Fleet), and when I was finally convinced to leave that stack behind, Deis made it easy for me again to do the same on Kubernetes. I'm the biggest fan of Deis anywhere.

(I've felt the loss of the Deis Workflow maintainers so badly that I'm personally working on the team to fork Deis! But the bus-factor risk is way too high for my place of work, which is a university; they want something they can understand and that they can support, or pay a vendor to support, if I am not around anymore. That won't stop me, but it also means I need to keep an ear to the ground for something we can use to start doing CI/CD here.)

The technical leaders in my place of work have already made the leap to AWS, but are just testing the waters of e.g. the spot market and serverless (Lambda) to try to get the cost and reliability benefits to start to materialize, and they would really like to skip containers altogether and start building everything for Lambda. I know enough to say "whoa there, Icarus, that's no way to reach Lift-and-Shift", and I'm pretty sure from my experience that you should start lower (but still with some higher abstraction than plain old Docker containers, and also not Compose or Fargate).

So I'm in a pickle because Deis is no longer offering support for end users, otherwise that's probably what I'd still be recommending.

I've been looking at possible replacements like Cloud Foundry (and Convox, and Empire), but your haiku hits me right in the feels and is the really important message I need to deliver. I am developing an application right now and I need the kind of devops machinery and support that is appropriate for that kind of effort in 2018.

(and I definitely don't want to be embroiled in exploratory project to implement containers for the whole organization some time in the next 5 years, at least not before we can get something out the door for our customers across campus...)

I just don't think we do enough software development to justify spending on something like PCF but I'm not the one who would need to be convinced, either!


If you're using buildpacks, Cloud Foundry is the place to be. I obviously feel like PCF is the bee's knees, but there are OSS alternatives.

You can run OSS Cloud Foundry (now called Cloud Foundry Application Runtime or CFAR) using BOSH and cf-deployment. You can also run Kubernetes with the same operator tools if you use CF Container Runtime (CFCR), for people who need that capability.

SUSE sponsor an OSS GUI called Stratos.

For CI/CD, I am alllll about Concourse. Automation-as-a-Service is a secret gamechanger.

My work email is in my profile if you'd like me to hop on a call with anyone.


Hey, I just watched the Riff video and I'm a little blown away! Can't believe you've been downvoted


Azure: Fix the API Gateway. And the "managed database". Then I might go back to using you.


To be fair, everything is a walled garden, even open source solutions. You still need infrastructure to run your code, and unless you want to build your own servers you still need to pay AWS/Azure/GCP/DigitalOcean/etc. to rent that infra. So I really don't see what the problem is with using something like AWS exclusively. If anything, it makes your life easier.


Note that this discussion is about AWS Lambda, not AWS generally.

There are upsides and downsides to using AWS Lambda, but characterizing it as a walled garden is pretty reasonable. That's not the same as code you can run on any Linux server.


You really class DO with those other guys for lock-in?

Or is that the part you are blind to?


I mean, Digital Ocean provides infrastructure too. Nothing is stopping you from running something like OpenFaaS on DO.


Yeah, everything is a monopoly. Even a free market. You still need to buy stuff!

Wut?


Agreed here. If you can't run your code without AWS or whatever your vendor is, you've got bigger problems.

I'm talking about the core business code. Ops is important but replaceable.


To be fair the serverless framework supports several types of cloud solutions besides AWS — but I’m not so sure how easily one can switch mid-project.


Did you exchange a walk-on part in a war for a lead role in a cage?


Wish you were here to tell you that the Pink Floyd reference doesn't quite fit.


You say it like it’s a bad thing.

I’m betting on Amazon being in business at least as long as IBM. The benefits far outweigh the costs of having to port my code in 100+ years. If the machines aren’t sentient by then...


I hope you enjoy lock-in pricing...


> But RDMS systems are just another monolith — failing to scale well and they don’t support the idea of organically evolving agile systems.

RDBMS systems could handle billions of complex queries per day in the 1990s (i.e. last century), and they ship with an entire language designed to let you safely, incrementally evolve your data model.

MySQL and ORMs are not the limits of that universe.


20 years in this field has taught me that (1) we move on to new technologies more often because we don't understand the current ones than because the current ones are flawed, (2) we fail to weigh the costs and risks and setback of moving to new technologies, and (3) we don't realize that we're conserving overall complexity and flawedness, just moving it around.


> we don't realize that we're conserving overall complexity and flawedness, just moving it around.

I broadly think so too: https://news.ycombinator.com/item?id=5262556

Though the shifting sands of the economics of compute, disk and network tend to favour this or that approach as time goes on. So while FaaSes are just CGI, they aren't just CGI; but we can at least try to be non-doomed with regards to repetition of history.


(4) new tech is sexy, old tech is all cranky old guys, like 30+, yuk.


The problem is that these young whippersnappers don't even realize that there is nothing "new" about this tech stack. You can recreate the new sexy with 30-year-old tech. You are talking about a load balancer redirecting requests to individual CGI scripts based on the URL. They have just given up knowing how to set up and configure physical servers.


> They have just given up knowing how to set up and configure physical servers.

Or they know how complex and error-prone it can be, and decided to spend their time on other things.

It’s good to know how that stuff works (how to configure a LB, install nginx, rack a server), the way being able to do long division by hand is good to know. But when you’re crunching numbers all day, it’s easier to use a calculator.


More like learning to use a slide rule :) You still have to learn how to set up a load balancer (API Gateway), a firewall (IAM, API Gateway), server config (CloudFormation, API Gateway, S3, etc.) and so on. And those are vendor-specific. Move to Azure or GCP and you have a whole new set of "serverless" servers to learn to configure. About the only thing you have really given up is knowing where your machines are physically.


You've also given up having to buy machines, predict resource needs, over-provision to meet peak demand, and maintain servers for databases, caching, web servers, etc.

If I want to load test something for a day, I can spin up 20 EC2 instances with a script and spin them down afterwards. Then I can see where my bottlenecks are, provision instances, load balancers, increased disk IOPS, etc. as appropriate, and tear down everything I don't need.
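To make that concrete, here's roughly what that spin-up/tear-down script looks like with boto3 (the AMI ID, instance type and tag are placeholders, not a recommendation):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Spin up 20 throwaway instances for a day of load testing.
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI
        InstanceType="c5.large",
        MinCount=20,
        MaxCount=20,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "purpose", "Value": "load-test"}],
        }],
    )
    ids = [i["InstanceId"] for i in resp["Instances"]]

    # ... run the load test ...

    # Tear everything down so nothing keeps billing overnight.
    ec2.terminate_instances(InstanceIds=ids)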


Apples to apples: your 20 EC2 instances are just 20 VPSes at any VPS provider, located geographically wherever you want to deploy them. Also with a script. You still have not gained anything from your vendor lock-in. IaaS has been around since the 90s.


And what about the load balancers, the database instances, the queuing system, the global CDN, the caching servers, etc.? I could script my own autoscaling strategy that integrates with metrics from the running instances, but why would I when I can click a few buttons and get autoscaling based on CloudWatch metrics, the size of an SQS queue, CPU usage, etc.?

But as far as "vendor lock-in" goes, it's like developers wrapping database access in a repository pattern just in case they want to change databases. In the real world, hardly anyone takes on massive infrastructure changes to save a few dollars.

On the other hand, there are frameworks like Serverless and Terraform that let you build infrastructure in a cloud-vendor-neutral way.


Again, each piece you have named can be done with "older" tech, which was the original point of this thread. Every few years the tech industry reinvents the same tech, and a new generation of developers thinks manna has fallen from heaven, when in truth it is the same as the last round with new buzzwords attached.


Yes, it can be done, but how efficiently? I couldn't call up the netops guys and have them buy and provision all of the resources I needed to test scalability within the time it takes me to set up a CloudFormation script.

In 2008 we had racks of servers we were leasing and that were sitting idle most of the time just so we could stress test our Windows Mobile apps.

I've been developing professionally for 20 years and 10 years before that as a hobbyist. I know what a pain it is to get hardware for what you need when your company has to manage all of its own infrastructure.

Just setting up EC2 instances and installing software on them doesn't reduce the pain by much. Sure, you're cutting down on your capex, but you still end up babysitting servers and doing the "undifferentiated heavy lifting". I would much rather stand up a bunch of RDS instances.

As far as serverless goes, why manage servers at all when you can either create a Lambda function for the lightweight stuff or deploy Docker images with Fargate? That's just one less thing to manage, and you can concentrate on development.


I am not disagreeing with you that it is easier than deploying your own infrastructure... But, back to my original point, Lambda functions are not anything new. They are simply an HTTP app that is "typically" responding to a single route. The API Gateway is simply a configured proxy routing the "public" routes to your various "functions".
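To illustrate the equivalence, here are the two shapes side by side; the handler names and route are made up, and the only real difference is who runs the process and does the routing:

    # A "function": API Gateway proxies one route to this handler.
    import json

    def get_widget(event, context):
        widget_id = event["pathParameters"]["id"]
        return {"statusCode": 200, "body": json.dumps({"id": widget_id})}

    # The same thing as a single-route HTTP app you could run anywhere.
    from flask import Flask, jsonify
    app = Flask(__name__)

    @app.route("/widgets/<id>")
    def get_widget_route(id):
        return jsonify({"id": id})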

All the parts are easily replaced or scaled however you see fit. Your function can be in any language that can respond to HTTP, on any platform you want. You can put whatever proxy you want in front to define your routes. You can get as simple or as complicated as you want.

Serverless is not serverless; you are just abstracted away from it.

[EDIT] I would add that personally I would spin up a Flynn cluster for you on Digital Ocean :)


With serverless, you automatically get scale for each endpoint individually, not just the entire app. If for some reason you get an unexpected ratio of GET requests to POST requests, just the GET Lambda will scale. If I tried to do the same with EC2 instances behind an ELB, I wouldn’t get the same level of granularity.

And Lambdas aren’t just about responding to HTTP requests; they are also used to respond to messages, CloudWatch events, files being written to S3, etc. I would hate to have to stand up servers for that. Even if you don’t want to get “locked in” to Lambda, why not serverless Docker?
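For what it's worth, this is roughly what that per-endpoint split looks like in a serverless.yml (paths, handler names and bucket are invented); each function scales independently, and the last one shows the S3-event case:

    functions:
      getWidget:
        handler: handlers.get_widget
        events:
          - http:
              path: widgets/{id}
              method: get
      createWidget:
        handler: handlers.create_widget
        events:
          - http:
              path: widgets
              method: post
      onUpload:
        handler: handlers.on_upload
        events:
          - s3: my-upload-bucket   # fires on object-created events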


After reading that I'm so happy that I develop traditional Flask and Rails applications with server side templates and tiny bits of JS thrown in when necessary.

I'm all for moving forward and using new stuff if it makes my life better, but from the looks of it, serverless is still many years away from the point where discussions like that can even take place.


I’m jealous. I left Rails behind a couple years ago because I wanted to learn new things and not get stuck on a single stack. I’m currently working with Typescript/Node on a serverless framework on AWS. It’s got all the buzzwords but dear lord it’s complicated. I would not recommend it.


I find TypeScript to be an absolute godsend in medium/larger projects. It's more complex to get started with, but for understanding and maintaining the code it's way better.


I like Typescript a lot, especially compared to plain JS. AWS lambda, on the other hand, requires very special use cases in order to justify the complexity and learning curve, IMO. It’s not great as a general programming environment.


> Serverless is still many years away

I strongly disagree. Yes, it is early, and some of the tools (notably debugging and logs) aren't anywhere near the level they need to be.

I'm developing an app with serverless, and this article really resonated with my struggles. I think once Aurora Serverless launches, allowing developers who need a relational database to easily move onto the platform, you will see rapid growth in serverless.

Why? Because it makes so much sense. Why worry about managing servers or scaling? Why continually write the same boilerplate glue code over and over again?

I know that I'd rather write a configuration file wiring up best-of-breed components than write code. Don't get me wrong, I like writing code, but I'd rather concentrate on the business logic.


> I'm developing an app with serverless

Lambda? If so, what benefit do you see over running AWS Container Service?

I ask because I've tried both. Serverless frameworks (AWS Lambda/AzFunc) were horrific. I picked up Docker as an answer and never looked back.

Others in my company are abandoning serverless after seeing our success. It turns out that being able to easily run things locally, very similar to production, is very important. We have no problem concentrating on the business logic AND keeping flexibility.


It really depends on what you are doing, I guess. It sounds like you enjoy the dev aspect of Docker, which tells me you are doing more than just running a function.


You can run azure functions locally no problem.

I've loved my time with serverless framework and aws lambda.


You disagree, yet readily admit the toolchain is lacking.

Parent's point is saying exactly that: filling out the toolchain to a meaningful completeness is going to take a couple of years.

This isn’t a ding on Lambdas, just a reality: there’s a huge backlog of capabilities to catch up on.


Serverless has nothing to do with frontend vs backend templating. I use Lambda and do server rendered pages. I just don't have to manage the server, it's invisible to me.


AWS have rebranded everything they can as "serverless" and are pushing a definition of serverless which is, approximately, "it's AWS". I cannot blame them. I would too, it's a phenomenal honeypot.

I prefer to use the label "FaaS", because it fits clearly into an existing taxonomy and doesn't spark silly arguments about what is, and what is not, serverless or serverful.

Disclosure: I work for Pivotal on a FaaS project.


You can still use serverless while maintaining a traditional architecture. I have a LAMP app, but I use Lambda + API Gateway to handle a multitude of tasks. The main benefit I have seen so far is that I get to work on complex ideas without worrying too much if this is going to work or scale on my main stack.


> Serverless is still many years away

This could imply that serverless "just needs to improve" to a point where it is good for most.

I argue it will NEVER* be good, no matter what kind of glue we put on top of it.

Because, I think, no matter what you do or how hard you try, it is the most complex way to do a software project.

*Never: probably not in the next millennium...


The number of tech problems that exist today and will not be solved or made obsolete within the next 1000 years: 0


What is complex about writing a function?


Nothing. However, no "serverless" solution is only writing a function (throw in some IAM roles, database connection pooling, API Gateway, etc.).


Whoa, whoa, whoa. If you're building a serverless app, you don't start with Flask. You start with the Lambda and plain old Python. Seriously, what he wrote basically says he tried to build to a construct which Lambda isn't meant to directly support and then had all sorts of problems.

Stop trying to write a full server and then map it to lambda. Start with lambda and map it to your service. There, done. That's all you need to know.


Yep, the whole article reads like a strawman against Python. This mentality of using "cool" frameworks and joining the "cool" JavaScript kids (isn't that an oxymoron?) reeks of the hipsteresque mentality of the whole JavaScript community... just wait until the next `left_pad`.


Blame Zappa. It’s a very cool framework, but it promises serverless for frameworks including Django and Flask. It may be too leaky an abstraction for somebody trying to migrate an existing monolith to serverless or semi-serverless.


> "And now we no longer worry about Python version 2 or 3 (is it ever upgrading?)"

I just threw out my back cringing.


Yep. He uses a 2017 version of JavaScript but complains about a 9 year old version of Python.


Some of these observations are okay, but some of them border on dangerous or not fully considered. Python is an absolutely fine tool in your tool belt for serverless. I use it along with JavaScript all the time, depending on which has better libraries or makes more sense for a given requirement. Quite honestly, the best part about serverless is that you can generally pick and choose which tool is right for the job, right down to the language, in a far more compositional manner than with more traditional distributed SOA platforms.

Dynamo is pretty good, but its value starts to dwindle when you want to be able to do local development, possibly even without a network connection. And most of the traditional ways of interacting with the data layer aren't really available. So, for instance, you're not going to be able to use an ORM for a simple application with Dynamo, which means writing a lot of your stuff from scratch.
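What "from scratch" ends up meaning in practice is hand-rolling the data access you would normally get for free; something along these lines (table and attribute names are made up):

    import boto3

    table = boto3.resource("dynamodb").Table("users")

    def save_user(user_id, email):
        # no models, no migrations, no relations -- just items
        table.put_item(Item={"pk": f"user#{user_id}", "email": email})

    def load_user(user_id):
        resp = table.get_item(Key={"pk": f"user#{user_id}"})
        return resp.get("Item")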

So given that, pragmatically, you probably still want a database, you're going to end up in a position where you can't possibly be 100% "serverless". A persistent database connection is a good thing, and being able to control the number of connections is an absolute requirement at scale. Even if you can tweak your Lambdas just right to accommodate your DB's maximum number of connections, you're needlessly assuming the cost of opening one of those connections on each invocation of your Lambda.
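The usual mitigation is to open the connection outside the handler so warm containers reuse it; something like this (pymysql and the env var names are just what I'd reach for, not a prescription, and it does nothing about the max-connections ceiling):

    import os
    import pymysql

    # Runs once per container; warm invocations reuse the same connection.
    conn = pymysql.connect(
        host=os.environ["DB_HOST"],
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
        db=os.environ["DB_NAME"],
    )

    def handler(event, context):
        with conn.cursor() as cur:
            cur.execute("SELECT COUNT(*) FROM widgets")
            (count,) = cur.fetchone()
        return {"statusCode": 200, "body": str(count)}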

My recommendation is to use serverless where it's really well suited, which is distributed, event-driven processing. Your data backend becomes an RPC that can help work with the top of the funnel to map and distribute well-populated messages through your system. For this, I use protocol buffers, and base64-encode their serialized bytes into an SNS topic. Depending on message size, your mileage may vary here.
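Concretely, the publish side is only a couple of lines (the message class and topic ARN below are placeholders):

    import base64
    import boto3

    sns = boto3.client("sns")
    TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:events"  # placeholder

    def publish(msg):
        # msg is a generated protobuf message instance
        payload = base64.b64encode(msg.SerializeToString()).decode("ascii")
        sns.publish(TopicArn=TOPIC_ARN, Message=payload)

    # consumer side: body = base64.b64decode(record["Sns"]["Message"])
    #                event_msg.ParseFromString(body)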

You can still use some of the more clever AWS offerings to reduce your dependence on a fixed, always-running server. For instance, Fargate may make it possible to run a persistent RPC server that manages read and write requests while maintaining a well-optimized connection pool with your database.

I agree with using JWT for authentication. I agreed with it back when stateless authentication pre-dated the service offerings that made serverless a possible paradigm. Serverless generally requires statelessness, but you can still reap the benefits of doing the same thing with servers.
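The stateless bit is just this kind of thing: the token carries the claims and a signature, so any Lambda (or server) holding the secret can verify it without a session store. A minimal PyJWT sketch, with a made-up secret source:

    import os
    import time
    import jwt  # PyJWT

    SECRET = os.environ["JWT_SECRET"]

    def issue_token(user_id):
        claims = {"sub": user_id, "exp": int(time.time()) + 3600}
        return jwt.encode(claims, SECRET, algorithm="HS256")

    def verify_token(token):
        # raises jwt.ExpiredSignatureError / jwt.InvalidTokenError if bad
        return jwt.decode(token, SECRET, algorithms=["HS256"])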

Hosting a static SPA in S3 is, I think, one of the less challenging arguments in this blog post, and it has been good practice for going on five years now. Vue isn't necessarily part of it, and marrying the framework to the choice of hosting muddies the waters on what's good advice and what's just an opinion.

In all, introducing serverless technologies to your platform is a great way to significantly minimize your infrastructure costs. It comes at a similar price as building any other SOA: an increase in the cost of maintenance as debuggability becomes more difficult and the network becomes more complex. So it's important to think critically about what parts you should take and what parts you should leave or defer, given your architecture and your business requirements.


> But RDMS systems are just another monolith — failing to scale well and they don’t support the idea of organically evolving agile systems.

What the hell does this mean? Completely unsubstantiated nonsense.

Hardly anyone needs to scale Postgres past 1B records and 50K QPS, which can be achieved on a relatively affordable pair of synchronously-replicated boxes.

This guy clearly doesn't know anything beyond year 1 basics and the post reads like a fatal overdose of Kool-Aid.


What this seems to be saying is that serverless is great if you are doing JavaScript-heavy SPAs, otherwise not so much. Serverless is good if you don't actually need to talk to the server.


I didn't get that from this article at all. It didn't really seem to say anything about backend stuff.

I build huge serverless backend applications, and IMO it's been fantastic. There is a learning curve, because your application's execution environment is pretty different from traditional applications, but it's allowed us to build a remarkably complex and scalable application, very quickly. And it's been pretty maintainable too.


Can you elaborate on what your application does, what’s a basic flow of a request, and finally, what are the high-level steps you take to ship a new feature or API endpoint?


I can't elaborate on what the application does, but I can talk about high-level architecture and development.

Basically we have an API that performs asynchronous data analysis and processing. Our fronting service receives a request, writes some metadata, and places the request in a "queue", which is picked up by a backend poller that starts a workflow execution, which orchestrates the fulfillment of the request. This is all serverless (AWS tech...API Gateway, Lambda, Step Functions, S3, DynamoDB, CloudFormation, CloudWatch, etc.). Serverless makes deployments much easier, since we can version our Lambdas and State Machines using CloudFormation, and have many different versions running at the same time (not fun if you're managing your own hardware!). We have a CD pipeline that builds code changes, deploys them to a test account, and runs integration tests. We use CloudWatch Alarms to monitor production and alert us of any issues.
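To give a feel for the "poller starts a workflow execution" step, it's roughly this much code (the ARN source, record shape and field names are illustrative, not our actual schema):

    import json
    import os
    import boto3

    sfn = boto3.client("stepfunctions")

    def poller_handler(event, context):
        # each record stands in for one queued request picked up by the poller
        for record in event.get("Records", []):
            sfn.start_execution(
                stateMachineArn=os.environ["STATE_MACHINE_ARN"],
                input=json.dumps({"requestId": record["messageId"]}),
            )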

We have some development scripts for pushing code changes with CloudFormation for testing during development. We use that to develop, then once we check the code in, it works its way through our pipeline and into production.


Totally agree. I used Google Cloud Functions + Google Cloud Storage to deploy a single-function “app”. We replaced a SaaS that was a UI for recurring plan signups over Stripe; it had started costing us $500/mo. I replaced it with 16 hours of dev + testing.

It turned out really well and am very happy with it thus far. I expect maintenance to be minimal thanks to serverless.

Though I don’t think it’d work as well for an app with 100+ database tables. Definitely a middle ground in there.


Just curious, what kinds of apps use 100+ database tables? I don't know too much about this stuff.


We offer a SaaS product that helps manage pet services businesses. Think CRM+POS+ERP all mashed together, targeted at a niche. Our app is highly configurable by each business. It turns out these businesses are pretty complex :). We’re at 152 production tables today.


Our core banking app has 1000+ tables.


This sub-thread is fascinating. On the pet service business, it sounds like there might be more tables due to some kind of object polymorphism on a per customer basis? Are there any other "per-whatever" expansion factors that are multiplying your table count?


A CRM+POS+ERP is pretty complex by itself, and being more configurable means more stuff is in the database vs the code.

I work with a platform that also does CRM/POS/ERP (plus a bunch more), and the products alone have over 10 tables for describing them: the base products, their variants, a list of generic variant attributes, a table detailing which variants have which attributes, a table for specifying the values that those variants have of those attributes, the product categories, two tables for configuring the taxes applied to each product (on purchases and sales), the product images, the list of suppliers.

We're already on 10 and we haven't even used those products for anything (stocking, selling, invoicing, purchasing, etc, etc).


Two paragraphs in and I want to slap this guy

"That’s quaint" ... ugh. You're quaint.


There's more a few paragraphs down:

> And now we no longer worry about Python version 2 or 3 (is it ever upgrading?)


As if Node doesn't have multiple versions; nvm* exists and is widely used (and I really like Node).

*Node version manager


It's much different though. Node is backwards compatible; Python 3 is definitely not backwards compatible with 2. And Python 3 isn't exactly new...


Node’s policy is that major versions may introduce backwards compatibility, and Node 4.0 did introduce some.

Just like Python.


Did you mean incompatibility?


Yes indeed


It is a really obnoxious article, both in tone and content, and I have flagged it accordingly :)


> old-time request-response style of a website with a session managed by the server

Breaking news: your app still does this. You just moved the responsibility of request/response elsewhere.


The same blog has a great post on cold-start times showing Python as the clear winner:

  https://read.acloud.guru/does-coding-language-memory-or-package-size-affect-cold-starts-of-aws-lambda-a15e26d12c76
Cold starts are a real issue, and while warming via pings can mitigate them, you will still run into cold starts when demand scales up.

Java with Spring is really difficult with AWS Lambda because of the slow cold starts. Five seconds for a cold start is unacceptable for many applications.


"AWS Certified Technologist" == "AWS Lock-In Specialist"


These lessons match my experience building https://github.com/nzoschke/gofaas

Except I opt for Golang and the Serverless Application Model (SAM).

Go lets you ditch even more stuff by cross-compiling binaries.

And SAM is a framework built by AWS and vastly simplifies the config files.


Lesson 7: when you reach webscale(TM) it gets expensive AF


Do you have some example numbers here?


https://servers.lol/ is one resource, at a very high level, for seeing whether EC2 or Lambda is a good fit for the use-case you are looking at.

The site gives a cost estimate and an application score based on latency, burstiness, and function execution time.


That is an awesome resource, my friend.


The article mentions they used Auth0 and Cognito. I spent quite a bit of time researching Cognito (https://github.com/baus/cognito-strap), but I never figured out how to recognize which user is logged in when using federated identities. I found the docs to be misleading or wrong in many cases.

I'm curious if anyone is actually using Cognito in production. It feels like an alpha product to me.


It's weird, with all the time and money and brainpower invested over the last 10 years, I still find Heroku to be the lowest maintenance.


Agreed. Serverless is not free, or even cheap for that matter. It’s an entirely new skillset that comes with a million new things to learn and worry about. No thanks.


Well it's a thing. But the optimization problem needs to be stated clearly. And I think it is the solution to some optimization problems. Some analysis around that would be interesting.


Heroku does everything every new platform hopes to achieve and has been doing it for years…


That was interesting. Two questions:

1) Is there a problem with python, or a problem with flask? Isn't this what chalice is for? https://github.com/aws/chalice

2) How are you dealing with cold starts?

I learned stuff from this post, but I would have learned more with some background about the workload etc so I could reason about what generalizes and what doesn't.


Not the author, but you can set up CloudWatch to hit your Lambdas at defined intervals. I set up my Lambdas that are accessed through API Gateway with a special header to check for. If the header is there with the correct value, it just returns. Most keep-alive checks are in the 10-20ms range, and since Lambda bills in increments of 100ms, it's the lowest possible tier for getting charged.
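In handler terms it's just an early return, something like this (the header name and env var are just whatever you pick):

    import os

    def handler(event, context):
        headers = event.get("headers") or {}
        # keep-alive ping from the CloudWatch-triggered warmer
        if headers.get("X-Keep-Warm") == os.environ.get("WARM_TOKEN"):
            return {"statusCode": 200, "body": ""}

        # ... normal request handling ...
        return {"statusCode": 200, "body": "real work"}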


We have done this. The problem is that concurrent requests have to warm a new instance, so if your concurrent workload increases, newcomers face cold starts. Worth noting that we are more worried about the user experience of slow responses than about price.

Edit: forgot to say thanks for suggestion! Also, here is a related article: https://hackernoon.com/im-afraid-you-re-thinking-about-aws-l...


That's an interesting problem. We don't get many concurrent requests and if we do, well then it's not a huge deal.

How many instances do you want running? You could set up a separate keep-alive path that sends another request to the Lambda, with a variable tracking how deep into the keep-alive 'recursion' you are, and break out once you are deep enough. Does that make sense? Super weird and just off the top of my head.

edit: this isn't a good solution either, because if you have a Lambda kicking off 4 other Lambdas because you want 5 running, and someone makes a request, well then you still haven't warmed up that 6th and your 5 Lambdas are busy running the keep-warm code...


If I understand your suggestion right, it's to heartbeat concurrently to force more warm instances. We have played with that, but spikes are spikes - the most interesting ones defy expectations. As with many apps, the conditions that make us spike make performance more important, not less.

Just found the same author as OP with a clever solution here: https://read.acloud.guru/cold-starting-lambdas-2c663055589e

Having the app pre-warm instances on a per-user basis is super cool for user-driven workloads like web servers. To make matters worse, we are serving an API that takes hits from third-party streams, so our concurrency is based on their client behavior, not something we can easily link to a session scope, like users. Tricky!


Yes. That's what I mean. The per user basis does sound interesting.

Sometimes, though, you can't force a square peg into a round hole. I dislike server maintenance, but Docker is a decent alternative to Lambdas if you can absorb the extra cost.


Agreed, I think that's the state of the art: if variable concurrency is important, manage your own spare capacity. But I expect AWS and other providers will some day let us pay for reserved capacity without managing it, and I can't wait.


Yeah, it seems like a lot of resources are wasted on useless pings.


It does. But if your endpoint is so inactive that it sits idle most of the time, having it on a server/EC2 instance means you are paying 24/7 for it to sit there not doing anything. You could argue that paying to keep the Lambda warm isn't much different from paying for a server to be idle half the time.
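Back-of-the-envelope, with rough 2018-ish prices (ballpark from memory, not a quote):

    ping every 5 min              ->  ~8,640 invocations/month
    8,640 x 100 ms x 128 MB       ->  ~108 GB-s  ->  well under a cent at ~$0.0000167/GB-s
    t2.micro idling 24/7 instead  ->  ~$0.012/hr ->  roughly $8-9/month

So the pings themselves cost basically nothing; the waste is more about complexity than dollars.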


That was really great. It would have benefited from discussing the type of apps they were building (estimated traffic, etc).

I'm curious whether the 70-90% savings include dev time as well?


Lesson 8: Serverless isn't the solution for everything / everyone.


If you're curious what cost savings AWS Lambda may provide, here's a handy calculator: https://servers.lol


Any comments on the statement that Azure Functions are extremely cheap to use? (Work-related, and I'm new to Azure outside of App Services and VMs.)

Also looking for any gotchas; someone mentioned stability/compatibility issues in another post.


There's no such thing as serverless, just someone else's server.


There's no such thing as the Internet, just someone else's network.


In my opinion, serverless is not there yet for large-scale, latency-sensitive use-cases (where delays cannot be hidden by UI tricks). The startup time of the Lambda runtime (cold container and then the language runtime) is too high for web use-cases where a tail latency of multiple seconds cannot be tolerated.

Lambda-style serverless is really good if you have low RPS and want to pay only for use because of that low RPS (prototypes, small production apps, cron jobs, regularly scheduled events, compute-intensive image processing jobs).


I'm surprised there is no mention of all the downsides of JWT. It's good if you need to scale infinitely but a total pain if you ever have to invalidate specific tokens.


There are zounds of languages which compile to javascript so it is definitely not the only option. With the advent of WASM this will likely get better so we can finally ditch this abomination.


Can't believe I've gone this long (2 years) without knowing about the `sls logs` command. It's life-changing.


Why does this guy think that JWTs protect against CSRF? They're unrelated (a JWT stored in a cookie is sent automatically by the browser just like a session ID, so CSRF still applies). This scares me. I want to know what projects he works on so I can avoid them. Not knowing something is insecure is one thing, but knowing that it's insecure scares me.


Classic post where you jump from one tech to another and completely shit all over the stuff you were previously using. There's hardly a real, tangible difference between Express and Flask. They both do routing and turn requests into responses. When folks make such naive, blanket statements as they do in articles like this, it's impossible to respect any of it.

I feel bad for this team and the future they’re going to face with the poor decision making at the top.


I'm sure you have a point, but making it in the form of a snarky dismissal breaks the site guidelines: https://news.ycombinator.com/newsguidelines.html. Any good effect from being right is drowned out by the bad effect of being a jerk.

Maybe someone who makes poorer technical choices than you doesn't deserve respect—that seems dubious, but we can argue about it. The community you're posting to, however, certainly does, and by posting like this you're not only disrespecting it but destroying it.

HN is a large, diffuse online community, so the bonds here are inevitably weak. Agitating snark acts as a solvent on those bonds, making the community less cohesive. This is exactly the opposite direction to the one we need. All the default forces already point that way; please don't make them worse.


I’ve been a member of this community a very long time. When content like this makes it to the homepage, it suggests the community thinks it’s good. Part of my responsibility as a member of this community is to try and point out when that is wrong. This post is full of misleading information.

I never said I didn’t respect the person. I said I didn’t respect the information in the post. I really can’t imagine how my remarks suggest that I’m a jerk.

Until I got flagged this was the top comment on the thread. The community seems to agree with my remarks.

I stand by them.


The issue isn't the corrective information in your post (sentences 2 and 3). That's great. The issue is the snark and name-calling (and borderline personal attack) in the rest, which is what I referred to as being a jerk. I'm not saying and don't think that you're a jerk; it's an unintentional side effect, but one we all need to guard against. As you know, HN is trying to be a forum where people post civilly and thoughtfully.

You can't judge this by upvotes. Indignation and snark often get heavily upvoted. That's a bug in the voting system, not an indication of comment quality.


I also felt that their main motivation was that they attended a seminar about serverless in late 2017, rather than having an actual problem that required serverless as a solution.

I wouldn't worry about the devs on their team so much, it's great resume building, but I feel for the rest of the company.

shruubi 76 days ago [flagged]

My god, this article is written like the author has just joined a cult and is about two more claims of serverless being the high-exalted away from drinking the Kool-Aid.

I mean, it's wonderful that you went to a conference and were able to take away some interesting lessons that you could apply to your product, but coming back and dropping your entire platform to rebuild in a new language just to use the fancy new things you learnt is utterly insane.

But hey, Kubecon has just finished, so I look forward to the upcoming article "How I rebuilt everything around Containers", should be thrilling to hear about how you tore up your product and rebuilt it a third time based upon some cool conference talks.


Please see https://news.ycombinator.com/item?id=17007019 and https://news.ycombinator.com/newsguidelines.html and don't post like this here, regardless of how right you are and how ignorant someone else may be.

Comments like these damage HN far more than a weak article.


Thanks. You spared me some typing, that's what I wanted to write as well.



