Hacker News
Serverless Aurora (serverless.com)
114 points by abd12 76 days ago | 62 comments



Serverless is a bad idea: it was designed to serve big infrastructure providers as a way to increase their lock-in factor, while the benefit to software developers is negative.

Serverless is simple but opaque; it's great for making demo apps but not ideal for real production systems. It narrows down integration possibilities, complicates large-scale development and deployment, and often, it makes it more difficult to keep track of backend errors... In addition to this, it makes you fully dependent on a single infrastructure provider; it takes away all your leverage when negotiating hosting costs.

The control panels of some of these services have become so complex and elaborate that you'll be wishing that you could just SSH into the server directly.


I built a fairly large system using the Serverless "framework" on AWS. It's been running for over 8 months now with zero maintenance and zero service disruptions, with an ingress of ~50M events per day and providing a sleek ReactJS/SemanticUI frontend that the users seem to really enjoy. Being a side project, it's been relatively stress-free.

While the application is still very profitable, the cost is ~10x its implementation on traditional servers. I'm not going to argue the pros and cons, just providing some numbers. At this point, we're working to optimize the original stack in order to reduce costs.

I will say though...honestly, working with Serverless and the AWS stack is a very pleasant experience.


Always great to see numbers. The running cost is 10X a typical deployment. Does that include any time you would have spent patching traditional servers?

Can you talk about the cost of your development time (one of the premises of serverless is increased developer productivity)?

Again, thanks for the real world numbers--always nice to see them.


The cost difference is purely based on hosting a similar stack on something like dedicated servers with OVH. However, as you would imagine, it would be far less robust unless we had a team of DevOps to manage it, and clearly that's where a big cost comes in. That said, I manage a somewhat similar stack at my full-time gig, and it's not terribly time-consuming.

Development time was roughly the same: frontend work was identical to a standard enterprise-grade SPA app and backend is all Lambda functions, which would've been implemented similarly on, for example, NodeJS and Express (which is what was done anyway for testing).

The primary cost (~60%) is due to the high volume of API Gateway requests. (Edit: see below comment for reason and plans to optimize)

Edit:

A HUGE time saving comes from being able to orchestrate AWS infrastructure easily via API, and it's well worth the premium in most cases. Our frontend application performs many automated management tasks on AWS resources such as S3, CloudFront, Lambda, Route53, etc. This of course does not directly relate to "Serverless," as it may be done by any application able to make API calls to AWS; however, just the fact that it's possible for these resources is very appealing to a multi-hat developer with limited time on a side project.


I'd like to hear more about it too if you don't mind. I always figured the latency associated with starting up containers (i.e. what Lambda uses under the covers) would render user-facing apps unusable. Totally get it for batch processing, one-off events etc though.


We use Lambda for our event processing pipeline and also for our GraphQL API consumed by our clients (frontend management application, mobile applications, etc).

The GraphQL API is backed by DynamoDB and is served via API Gateway. The frontend was built using ReactJS and Relay. I am currently seeing anywhere from 200-450ms per GraphQL request on the frontend, which is more than acceptable as it's just a management frontend application...and of course it may be optimized using techniques such as prerendering, caching, etc., but super-optimizing these applications is not a requirement.
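As a rough illustration of the shape of such a setup, here's a minimal Lambda handler behind an API Gateway proxy integration, written in Python for brevity (the commenter's stack is NodeJS). The echo "resolver" is a made-up stand-in for real GraphQL execution against DynamoDB:

```python
import json

def handler(event, context):
    """Hypothetical Lambda handler for a GraphQL endpoint behind an
    API Gateway proxy integration. A real handler would delegate the
    query to a GraphQL library and resolve fields from DynamoDB."""
    body = json.loads(event.get("body") or "{}")
    query = body.get("query", "")
    # Stand-in for actual GraphQL execution.
    result = {"data": {"echo": query}}
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(result),
    }
```

The proxy integration hands the handler the raw HTTP request as `event` and expects a `statusCode`/`headers`/`body` dict back, which is why no web framework is needed at all.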


Details on this would be great. We do a lot of serverless for content publishing and 3rd-party API integration work... I’d love to hear more about how others use it at scale.

I can confirm a couple of things. It seems like we spend a bit more time getting things to work as expected.

Troubleshooting can get expensive, given that the underlying system disappears between invocations.

Telemetry on the lambdas needs to improve. Finding what’s eating time gets tough when you are looking to optimize. (A lot is baked into this statement, but getting 800ms of compute down to 400ms is sometimes important whilst staying on Lambda.)
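In the absence of better built-in telemetry, one can at least instrument handler internals by hand. A minimal sketch, assuming Python and a willingness to grep CloudWatch logs for the printed timings:

```python
import time
from functools import wraps

def timed(fn):
    """Log wall-clock time of a function call. A crude stand-in for
    the missing Lambda telemetry; CloudWatch metrics or X-Ray would
    be the managed equivalents."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            print(f"{fn.__name__}: {elapsed_ms:.1f}ms")
    return wrapper
```

Wrapping the suspect stages of a handler this way at least narrows down which call is eating the 400ms.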


Unfortunately I cannot provide much detail about the nature of the application; however, I can say that we collect and process various types of events and perform automated actions based on them. The ingress begins at API Gateway, runs through a pipeline of Lambda functions, ends up in a Kinesis Firehose, and ultimately in S3 for further processing. The primary cost (~60%) comes from API Gateway acting as the event ingress point, which was a design mistake...though at the time it did make things easier.

We're currently working on migrating the event collectors to a pure CloudFront based solution (inspired by SnowPlow) where the events will be submitted to CloudFront via signed GET requests and the CF logs will be streamed to the same Lambda-based processing pipeline. Doing this will eliminate almost all of the overhead of API Gateway (still required for certain events that cannot be collected via HTTP GET).
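The GET-based collection approach might look something like this sketch; the collector host, path, and parameter names here are hypothetical, and real request signing (e.g. an HMAC over the query string) is omitted for brevity:

```python
from urllib.parse import urlencode

def event_url(endpoint, event):
    """Encode an event as a GET request to a CloudFront-backed
    collector, SnowPlow pixel-style: the event rides in the query
    string, and the CloudFront access logs become the event stream."""
    return endpoint + "/i?" + urlencode(sorted(event.items()))

url = event_url("https://collector.example.com",
                {"e": "pv", "uid": "u123", "page": "/home"})
```

Sorting the parameters keeps the URL deterministic, which matters if a signature over the query string is added later.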


Have you evaluated X-Ray for performance tuning? http://docs.aws.amazon.com/lambda/latest/dg/lambda-x-ray.htm...


Mind sharing more details on the project? I'm always curious what sorts of ideas are good fits for serverless versus "traditional" web apps.


See above comments by me. Unfortunately I cannot go into much detail regarding the nature of the application, but I've shared some extra details.


Serverless is (way?) over-hyped, but has valid use cases I think.

As an example, for a tiny side-project, I don't care about the markup or lock-in, I just want my code to run an unknown, small number of times per day for a cost-effective total price and a low cognitive and support overhead on me.


Sure, but I've found that even some use cases that seem perfect are in fact not as good as you think.

For example, I used Amazon Elastic Transcoder to do video compression/transcoding at a previous company, thinking that we would save a lot of time, but the API wasn't great; a lot of the default settings weren't right for our use case, and the API and notification flow were way more complicated than they needed to be and prone to errors (due to the sheer complexity of input parameters it could handle).

At one point, we had a problem receiving SNS notifications from the transcoder and we didn't know what was causing it, because we couldn't effectively access/browse the error log, and what was exposed to us wasn't detailed enough. Basically, the visibility in Amazon-land wasn't great.

We had to pay Amazon for special support so that they would tell us what the error was (after weeks of back-and-forth).

Now that it's all running, it's fine; so long as we don't need to change anything and don't need to start scaling up/sharding the pipelines; not sure how we'd do that through their UI.


Not sure why this is being downvoted. The author seems to have faced a problem with the abstraction of these services and has described it.

If anyone thinks that is wrong or has an argument supporting it or against it, just point it out. I don't think this is worthy of replyless downvotes.


If you’re still in the market for a transcoding service but fancy a better API, consider Transloadit. Disclaimer: I’m a founder :)


The irony is that in spite of its inferiority, because we were using other Amazon services like S3 for storage, it essentially "forces" us to use other Amazon services for everything else as well.


> In addition to this, it makes you fully dependent on a single infrastructure provider; it takes away all your leverage when negotiating hosting costs.

If I JDBC into this sort of infrastructure, how does that make me dependent on AWS? It's just a different pricing model and management style. Nothing more, nothing less.

(I'm not underestimating this service, quite the contrary, I think it's very neat for people without DevOps and/or frequent load.)


> If I JDBC into this sort of infrastructure, how does that make me dependent on AWS? It's just a different pricing model and management style. Nothing more, nothing less.

Except you'll probably have different performance constraints and have to architect your code accordingly. True decoupling is pretty hard to achieve.


You need to look at some of the work iRobot has done with Serverless. AWS has tools to manage all of this and do so at scale. By the time you need to SSH into a server to solve a problem, you’ve already failed.


This was also the biggest news of re:Invent for me. Total game changer. No need to provision large R instance types to support the batch processing that happens at 3 AM and sits idle most of the time. All environmental tiers can have 100% the same configuration.

I know of some projects burning millions a month in database costs because they like to replicate environments for testing branches. Switching to serverless Aurora would reduce that cost to probably low thousands.

Of course it all depends on how quickly it can scale up to meet the load and I reserve full judgement till I can try it out, but am very interested in this.


The idea is exciting, but the pricing looks too high for side projects. The docs state "Aurora Serverless can scale from a minimum of 1 ACU to a maximum of 256 ACUs." Unless I'm reading this incorrectly, you are paying for 1 ACU 24 hours a day.

$0.06 * 24 * 365 = $525.60 per year.

This is more expensive than a low-end RDS instance. It's a shame, I immediately thought this would be the perfect solution for side projects and prototypes, but the pricing killed it.
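For reference, the always-on math above as a quick calculation (the $0.06/ACU-hour rate is taken from the parent comment and may vary by region):

```python
def yearly_min_cost(acu_rate_per_hour=0.06):
    """Cost of keeping 1 ACU running 24/7 for a year, matching the
    parent comment's arithmetic. Rate is the quoted per-ACU-hour
    price, not an authoritative figure."""
    return round(acu_rate_per_hour * 24 * 365, 2)
```

Whether this floor actually applies depends on whether the database pauses when idle, which the replies below dispute.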


You pay per second of usage, and the ACU starts and shuts down automatically. If your DB is only accessed occasionally, then you are not paying for 24 hours of usage. I think the auto-scaling is the draw though, not necessarily the price. So you are only getting charged while the database is being accessed. I think the examples on the pricing page make it a little more clear.
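To make the per-second billing point concrete, a hedged sketch of what occasional usage might cost, using the $0.06/ACU-hour rate quoted upthread (actual billing details such as minimum durations and pause thresholds may differ):

```python
def usage_cost(active_seconds, acus=1, rate_per_acu_hour=0.06):
    """Approximate cost of a database that is only active for
    active_seconds, under per-second ACU billing. Illustrative
    arithmetic only, not AWS's exact billing rules."""
    return round(active_seconds / 3600 * acus * rate_per_acu_hour, 4)

# e.g. a DB active 2 hours a day for a 30-day month:
monthly = usage_cost(2 * 3600 * 30)
```

Compare that few dollars a month to the ~$525/year always-on figure from the grandparent: the gap is entirely in the idle hours.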


FaunaDB Serverless Cloud has global ACID transactions with per-request pricing. You can optimize your queries for cost by looking at the response headers, or support your organization with an on-premise multitenant cluster. Learn more about pricing and features here https://fauna.com/serverless


Would be strange if it was otherwise.

The whole serverless stuff only pays off if you have highly varying loads OR want to save on Ops staff.


Although not applicable for all use cases, you can use static storage as an alternative for radical cost savings:

https://github.com/Miserlou/NoDB

https://github.com/Miserlou/zappa-django-utils#using-an-s3-b...


On the other hand, for the next step up, e.g. small production services with spiky workloads, it looks great, assuming I've read the product details correctly.

Case in point, I have one service that doesn't usually need a lot of DB grunt, but has occasionally spiky and unpredictable read-heavy data extractions. It runs happily on a multi-AZ pair of db.t2.small instances, for which we currently pay $0.104/hr (ap-southeast-2). Frankly it could mostly run on a db.t2.micro but we occasionally need the headroom of the bigger instance and the availability of multi-AZ. So when Serverless Aurora comes here it'd be a shoo-in, being both cheaper and more scalable, and with more replicas to boot.
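Annualizing the quoted $0.104/hr for comparison (assuming that rate covers the multi-AZ pair; if it's per instance, double it):

```python
def annual_cost(hourly_rate):
    """Annualize an hourly instance rate. Purely illustrative
    arithmetic on the figure quoted in the comment above."""
    return round(hourly_rate * 24 * 365, 2)

rds_pair = annual_cost(0.104)
```

Against that ~$900/year baseline, a pay-per-use database that idles most of the day has a lot of room to be cheaper even at a higher per-hour rate.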



I was about to say you don't get encryption at rest with a low-end instance, but that looks like it's no longer the case since this summer.


Supposedly it turns off when you're not using it, so you'd need a small, consistent load all year to reach that price. And if that's the case, you'd just use a low-end RDS instance.

I think the value is in that you can build something quickly with no static instance cost that scales. You can (presumably) switch to normal instance-based RDS when your usage favors its pricing.


Yes, if it's not used, you don't pay.

So just a question of pricing your service on your end...


The whole point is that it's not on 24/7, so that calculation doesn't apply. If you need it always on or don't have super spiky workloads, stick with the standard instances.

https://youtu.be/k9M7QinznHc?t=22m54s


I wish Amazon or Google would just make a simple key/value store to give minimal persistence for their serverless offerings. DynamoDB comes closest but it still overcomplicates things.


I'm not super familiar with DynamoDB except as a strict key/value store. You set a partition key, you ignore your range key, and you go--what's particularly overcomplicated about that?
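The usage pattern in question really is tiny. As an in-memory stand-in (real code would be boto3 `put_item`/`get_item` calls against a table with only a partition key):

```python
class KVStore:
    """In-memory illustration of the 'strict key/value' DynamoDB
    pattern: a single partition key, no range key, get/put only.
    Not a DynamoDB client; just shows how small the needed API
    surface is."""

    def __init__(self):
        self._items = {}

    def put(self, key, value):
        self._items[key] = value

    def get(self, key, default=None):
        return self._items.get(key, default)

store = KVStore()
store.put("user#42", {"name": "Ada"})
```

The complications the parent alludes to (capacity units, scaling knobs) live outside this data path, which is arguably the point of the complaint.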


Setting the number of write nodes and read nodes to match your workload. I was scaling up a system, and as more things came online the traffic became more bursty; we were constantly bumping them up and wondering if we were allocating too much (probably useless fretting regardless).

Ultimately, we moved to an S3 only solution since we have such a low throughput environment and the simplicity is addictive.


Makes sense in the past, but with DynamoDB autoscaling that's largely gone by the wayside.


I hadn't heard about this feature update! Thanks for pointing me at it!


It's been a while since I played around with it but I recall having to increase the number of write nodes to load data at a reasonable rate, then drop the number back down to avoid overpaying. And there were limits on how often you could lower the number of instances so I had to be careful not to reach the limit.

Maybe I'm misremembering or that has changed?


In June this year AWS shipped DynamoDB autoscaling: https://aws.amazon.com/blogs/aws/new-auto-scaling-for-amazon...


Good to know!


Google Cloud Datastore?


That one is new to me but looks like a winner! Thanks


Worth noting that Google has essentially deprecated Cloud Datastore in favor of Firebase Cloud Firestore. As far as I can tell, it's unclear what the exact relationship between these two products is, but if you have a GCP project which has never used Cloud Datastore but has the Cloud Firestore beta enabled, you cannot access Cloud Datastore.


FYI, you can use the Datastore API with Firestore, you just don't get the real-time features. Firestore is backwards compatible with Datastore AFAIK.

(I work for GCP)


I wasn't aware. Thanks.


AWS and Azure have fully-managed Redis as an offering.


so...firebase?


When I looked at firebase it seemed like the fact that it's designed for client-side access meant the security model got in the way for the type of stuff I was trying to do (server-side access only)


The new Firestore database has first-class support for server side SDKs. The real-time database is pretty limited when it comes to server side support, and probably always will be.


You can make it authenticated reads only and only give your server authentication. Granted, it's not the primary use case it's trying to support, but it does support it.


AWS S3


Am struggling to understand the concept of Serverless Aurora and AWS Lambda integration - they highlight that AWS Lambda would scale with traffic and thereby Serverless Aurora kicks in to support this scale - but there is a limitation on AWS Lambda inside a VPC[1]. So in order to really utilize Serverless Aurora fully, would the proxy fleet have special permission to access Aurora?

Also, has anyone solved the issue of AWS Lambda (in a VPC) scaling beyond the limited IP space?

[1]http://docs.aws.amazon.com/lambda/latest/dg/vpc.html

ENIs = Projected peak concurrent executions * (Memory in GB / 3GB)
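That formula from the linked docs, expressed as code; rounding up to a whole ENI is an assumption on my part, since the formula as quoted doesn't specify it:

```python
import math

def projected_enis(peak_concurrent, memory_gb):
    """ENI capacity estimate for VPC-attached Lambdas, per the
    formula quoted above: peak concurrent executions times
    (memory in GB / 3 GB). Ceiling applied since you can't
    allocate a fractional ENI."""
    return math.ceil(peak_concurrent * (memory_gb / 3.0))

# e.g. 300 concurrent executions at 1.5 GB each:
needed = projected_enis(300, 1.5)
```

Each ENI also consumes an IP address in the subnet, which is where the "limited IP space" concern above comes from.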


But would you really need to put Serverless Aurora in a VPC? Is it even possible to do that? I expect it to work like DynamoDB in this regard: the tables/database just exist and access is determined by IAM role.


You probably shouldn't be putting your Lambda functions in a VPC unless they need access to a VPC resource. There was a pretty good presentation on that in the "What's new in Serverless" seminar.

On top of that, I believe you can use a /16 CIDR, which gives you over 65,000 IPs. Not sure why you think that wouldn't be enough.
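For the address math (AWS VPC CIDRs range from /16 to /28, and AWS reserves five addresses per subnet; treating the block as a single subnet for simplicity):

```python
def usable_ips(prefix_len, reserved=5):
    """Usable host addresses in a CIDR block of the given prefix
    length, minus the addresses AWS reserves per subnet. Simplified:
    a real VPC is carved into multiple subnets, each losing 5."""
    return 2 ** (32 - prefix_len) - reserved

vpc_max = usable_ips(16)   # largest VPC AWS allows
vpc_min = usable_ips(28)   # smallest
```

Since each VPC-attached Lambda ENI consumes one of these addresses, the /16 ceiling bounds how far Lambda can scale inside a VPC.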


This is super interesting to me on applications where I'm using Heroku's $50/month Postgres plan. Almost inevitably I find that the only reason I've upgraded past hobby level is for performance, not scaling. That it would auto-scale under load is just the icing on the cake.


What performance problems were you having specifically? What are your use cases? Just curious to hear your experience.


Alas, I don't have metrics. Just seemed noticeably faster on the more expensive Postgres plan.



It looks really great. The only concern I have (relative to other serverless features) is that it does have a top-end architectural limit (256 ACUs).

It's probably not worth fretting over, but it is worth mentioning when comparing to DynamoDB. That means in building a serverless web app, you might look at starting with Serverless Aurora, graduating to regular Aurora (as usage increases due to pricing), and then perhaps moving off workloads better suited to DynamoDB as you discover them.


Any ideas what replication model they're using under the hood to bring new instances online so quickly?


I wonder if this can be accessed from Lambda VPCs?


It will be accessible via HTTP, just like all of Amazon's APIs. Doesn't matter if you are in a VPC or not.


Aurora is accessible via TCP (mysql or Postgres), not HTTP.



