Amazon Aurora Serverless (amazon.com)
111 points by tiagobraw 8 days ago | hide | past | web | favorite | 74 comments





I feel like the choice of the name "serverless" is unnecessarily confusing/controversial, but I like the concept.

The ability to scale down to zero could certainly be useful for automated testing use cases.

As with anything on AWS it will take some testing to discover all the quirks and caveats, but I like the general direction this is headed.


AWS tends to attach the name "serverless" to anything that is designed to work with the rest of their serverless platform.

In this case, Aurora Serverless gets the name because it will scale to 0 which is useful for things like Lambda or Batch functions that only get called sporadically.


I don't get why a multitenant MySQL setup (like in the PHP days) isn't a much better solution for this: it doesn't require many instances of MySQL (or many VMs each running a single instance), nor does it require constant resources. This approach still dedicates resources per customer, and you incur the significant startup delay, for what seems to me very little reason.

But I guess for Amazon, inefficiency translates to $$$, so it wouldn't really make sense for them to do this.

I wonder about that more generally for cloud as well. For many things, making the backends multitenant would be far cheaper for the customers and just a bit more expensive for the providers. There's the security aspect, but is it really that different? Even with VMs, the kernel and hypervisors still need to not leak information, and even with physical servers, the "reboot and recovery" and remote-control mechanisms are often still avenues for security exploits.

This low level separation is easy and cheap to implement for the providers, but (it seems to me) much more expensive for the customers.

But I guess that's pretty much true for all cloud stuff. It provides automation, but at an operational cost, which is really not the trade off I'd like to make.


The general trend is serverless because not thinking about scaling is a very nice idea for developers and companies.

Once you see your environment adapt to the load on the fly, I don't think you ever want to go back to manually handling it.


Initially, I was going to agree with you. My first thought for a better name was "autoscaling Aurora," but I realized that was incorrect, because with plain autoscaling you still have to deal with things like server security updates.

Serverless still sounds like the better term to me because it describes exactly what you get: you never have to deal with a server. Although I would actually add autoscaling to the name (serverless autoscaling Aurora) so users know they don't have to deal with servers or autoscaling.

What do you guys think?


You don't really "deal with" servers in regular Aurora, but you do pay for them. So AFAIK regular Aurora is billed by the hour, while "serverless" Aurora is billed by the hour but only during hours when it's being used. And serverless Lambda is billed per request, so it seems like AWS isn't even using a consistent definition.

Don't you? It has been a while since I set one up, but I'm pretty sure I had to set the versions, and I think you have to launch 2 servers because they have to be part of a "cluster". Shit, I remember trying to delete them after I was done experimenting (I was comparing with our RDS instances), but I couldn't figure out how: the Aurora instance was a slave of our normal RDS instance, and the console kept complaining about something when I tried to delete them. My poor team is probably still paying for those instances.

Can any marketers comment? I feel like this has gotta be purely due to marketers not wanting their product to become a generic term, since literally no company names a product in a way that directly describes it.

So Azure SQL Database, Azure Virtual Machines, and Azure Bot Service don't describe the products? AWS is hilariously bad at naming. Not all companies are.

Azure does have a benefit of hindsight, and often their stuff is just named Azure + Obvious Name.

On the other hand, they came up with Bing (talk about hilariously bad), and Azure, which they've chosen to pronounce AZH-ər (Ajer) and not AZ-yoor (Ajur, like in Côte d'Azur). This is non-obvious, and makes them sound like uncultured Americans (like pronouncing niche as "nitch"; unlike cache, which I haven't heard pronounced "catch" yet).


Yes. In my view, obvious names make it easier to remember what something is or does. Understanding what Azure DNS does is easier than finding out and remembering what AWS Route 52 does.

*53. It's the DNS port number, but yes, not a great name.

I very much prefer AWS names to the Azure naming, but I don't like Windows culture or behavior either so I think it's just general dislike of Microsoft software.

It sounds like you're describing something irrational.

Function as a Service? (FaaS)

After all, we already have Container, Platform, and Infrastructure as a Service.


Second that. Serverless is a buzzword like AI. But the concept is brilliant from many perspectives. It may save architects and developers from db scaling headaches.

> The ability to scale down to zero could certainly be useful for automated testing use cases.

Isn't that exactly Pricing Example 2?


Yup. I was just saying it is a useful feature.

This really is a paradigm shift for relational databases. In general, on the DB side you can't scale horizontally like the application tier, so for fear of degradation of service, most of us end up overprovisioning the DB side. With serverless, auto-scaling Aurora, I think things will change for the DB tier too.

The Serverless section in Aurora FAQ [1] is also worth reading. The main gotcha I think is:

>> Q: Why isn't my Aurora Serverless DB Cluster automatically scaling?

>> Once a scaling operation is initiated, Aurora Serverless attempts to find a scaling point, which is a point in time at which the database can safely complete scaling. Aurora Serverless might not be able to find a scaling point if you have long-running queries or transactions in progress, or temporary tables or table locks in use.

So I think one will need to focus on quick OLTP type queries.
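As a concrete illustration of the kind of adjustment this implies (a sketch; the table name, SQL, and DB-API connection are placeholders, not anything from the FAQ): committing in small batches keeps each transaction short, so Aurora Serverless can find a scaling point between commits.

```python
# Sketch: batched writes with frequent commits so no single transaction
# runs long. Table name, SQL, and the connection object are placeholders.
def insert_in_batches(conn, rows, batch_size=500):
    cur = conn.cursor()
    for i in range(0, len(rows), batch_size):
        batch = rows[i:i + batch_size]
        cur.executemany("INSERT INTO events (payload) VALUES (%s)", batch)
        conn.commit()  # ends the transaction; a scaling point can occur here
```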

Still, I am quite excited to see how this shapes up.

[1]: https://aws.amazon.com/rds/aurora/faqs/


Yes, I think serverless is only possible now, because application developers are now familiar with infrastructure to a point that infrastructure provider and app developers can commonly understand the most crucial technical problems and tradeoffs involved in provisioning infrastructure.

I’ve found developers are more familiar with cloud based infrastructure than “infrastructure” people coming to AWS.

Every time I've seen infrastructure people come to AWS from an on-prem mindset, they screw it up. One example: they might use a single account with Dev, UAT, and production environments instead of separate accounts. That works okay as long as you aren't doing anything but EC2 instances, but as soon as you start doing anything else it gets complicated, because most AWS resource names have to be unique within the account.


The 'main gotcha' seems like the common thing people always gloss over when using IaaS.

It solves already solvable problems (albeit with more specialised tools/people) with a 'magic' solution that works until it doesn't, and then requires even more specialised people to maybe solve it.

But that's the whole profit model for IaaS: offer 'magic' solutions that are worse than what a qualified ops team would build, for the same money, to small companies who don't actually need the bells and whistles, and hope that by the time they do need them, they're locked into your platform too deeply to change when they inevitably realize they made a huge mistake.


Well, there is a general thing where small companies build a thing, and then they scale to the point where the thing doesn't work any more, and they have to make some changes.

History has taught me that accepting the inevitability of change is far better than trying to build a thing with so many features that it need never be changed.

In this particular case, I find it hard to believe that anybody will be "locked in" to a serverless SQL database in the same sense that, say, being a game company on top of iOS "locks you into the Apple Ecosystem," or writing your app in Rails "locks you into Ruby."


> I find it hard to believe that anybody will be "locked in" to a serverless SQL database

I didn't say that, did I? AWS/GCP/Azure/etc. as a whole are platforms that companies become locked into, both technically and in terms of mindset.


I appreciate the skepticism, but "for the same money" is off base in my experience. Maintaining IaaS services costs much less when taking personnel expenses into account.

> Maintaining IaaS services costs much less when taking personnel expenses

In my experience the cost savings come from people assuming that AWS means "I don't need Ops (any more)" and having developers with zero ops/sysadmin experience run their production environments.


There are a lot of us devs who have been at it long enough to have learned sysadmin and networking skills because we had to at the time. Older full-stack devs. We don't need sysadmins for 99% of projects. AWS and other solutions work for us and save a lot of time; we know our way around it, since we've been using it since its infancy.

That being said, it can burn you. For an ETL platform I was developing, cobbling it together with Lambda and other services was a nightmare. Things you'd expect to work didn't, and the cost was way too high for concurrency. So I went back to a single box with beanstalkd and common tools.

But I agree: many go with a promised solution without the expertise, and when it fails or gets hacked they're screwed. And those costs aren't factored in.

If we're not hosting and maintaining, and the client doesn't have the skillset in-house, I'll have the client go to Rackspace or similar.


I’m a developer, but I can hold my own against most “AWS Architects”. Most of them only know netops, and they don’t understand how the infrastructure decisions they make affect the other two parts of using AWS: devops and development.

> It solves already solvable problems (albeit with more specialised tools/people) with a 'magic' solution that works until it doesn't, and then requires even more specialised people to maybe solve it.

Because no one ever had scaling issues before IaaS that required major architectural changes...


> So I think one will need to focus on quick OLTP type queries.

True. Don’t use Aurora in any fashion for OLAP. That’s what Redshift is for.


This is extremely frustrating. One of the biggest use cases for wanting to use Aurora serverless is connecting to MySQL using lambda functions -- Function as a Service apps that can still use rdbms. The problem is that RDS endpoints only exist inside vpcs, and cold starting a lambda function with a vpc network interface can take 10+ seconds, making it useless for an API. I had hoped that, given they were making a serverless MySQL service, they'd make sure it actually plays well with lambda. Nope. Same problems regular RDS has. No better method of securing the connection beyond vpc firewall rules. Amazon, it's been a problem since lambda was launched and it hasn't been fixed. Either fix lambda vpc cold start times, or provide a better way to connect lambda to RDS. Just burying the problem only pisses off your customers when they try to buy into your hype.

And since it always comes up:

* Yes, there is plenty of reason to want to use MySQL with lambda. Wanting to run software on FaaS does not mean wanting to abandon rdbms. For a small app, dynamodb is overkill; for a small app that turns into a large app, dynamodb is a money pit.

* No, adding scheduled heartbeat requests to the lambda functions so they never have to cold start is not a real, long term, scalable solution. It's a hack, it doesn't solve the problem if your app actually scales up, and infrastructure shouldn't depend on horrible hacks to function correctly.
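For context, the warming hack being dismissed here typically looks like this on the function side (a sketch: the event payload shape and handler name are assumptions, with a scheduled CloudWatch rule assumed to invoke the function every few minutes):

```python
# Hypothetical Lambda handler implementing the "keep-warm" hack: a scheduled
# rule pings the function periodically, and the handler short-circuits on
# those pings so the container stays warm without doing real work.
def handler(event, context):
    # The scheduled rule is assumed to send {"warmup": true} as its payload.
    if isinstance(event, dict) and event.get("warmup"):
        return {"statusCode": 200, "body": "warm"}
    # ... real request handling would go here ...
    return {"statusCode": 200, "body": "handled %s" % event.get("path", "/")}
```

As the parent says, this only keeps a fixed number of containers warm; a traffic spike still forces fresh cold starts.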


I was disappointed to see this as well. I have just recently started learning a bit more about Lambda, and the RDS/VPC thing has been a bit of a sticking point. I had thought Aurora Serverless would fix the VPC issue. I appreciate the autoscaling, but at this point, if it has to be in a VPC anyway, I'd still rather use RDS/Postgres, assuming I'll save more time using PostgreSQL's advanced features than money on the autoscaling.

I guess everyone is just using DynamoDB with their lambda functions, but I miss a lot of the power Postgres has.


What's missing is support for CloudFormation. Why? Because then someone will write a plugin for the Serverless framework, and using Aurora Serverless will be simple: no trying to create VPCs, since it will all be scripted behind the scenes for you. There's only a single piece remaining, and then Serverless will totally boom.

That’s only part of it; the long startup time is still there, even with CloudFormation support. You can easily hack it all together with the Serverless framework now, and I’m really loving it for learning things, but that startup time is going to be a problem for anything customer-facing.

Creating a custom Lambda-backed resource and calling it from CloudFormation is not hard. I’m currently using two: one to create a secure parameter in Parameter Store, and one to create an AMI from an EC2 instance that I then use to create a launch configuration.

I’m sorry that you’re frustrated by our incremental improvement. This solves many problems for many people and I’m sorry it doesn’t solve your specific use case. That said, the problems you mentioned are definitely on the roadmap. Give us some time. We’re not burying the problem.

VPC NICs for Lambda were released in February 2016, and API Gateway + Lambda is heavily pushed as a use case. It's been 2 1/2 years, numerous new features have been released since then, both for Lambda and RDS, and this has not been resolved. The problem was apparent as soon as the feature was released. There are hundreds of blog articles about it, all of which tell you to never let your Lambda instances go cold to try to combat it. Numerous Lambda frameworks include plugins to automatically warm the function every 5 minutes. It's been an issue since day 1, and for 2 years the solution has been "coming soon".

If my use case is small, that is only true because everyone else gave up on this use case, because it doesn't work, or resigned themselves to using a terrible hack to get it to work. Even though AWS Marketing heavily implies it's a great workflow, and leaves developers to run face first into its problems.

To be clear though, are you saying that there's a roadmap for fixing Lambda in VPC, or a roadmap for connecting to Aurora with Lambda without being in a VPC? At this point I'd take the latter and forget about the rest.


You're welcome to email me randhunt@amazon.com

Assuming this hasn't changed since the preview started:

> The cool down period for scale-down is 15 minutes since the last scaling operation. The cool down period for scale-up is 5 minutes since the last scaling operation.


It has according to this blog post: https://aws.amazon.com/blogs/aws/aurora-serverless-ga/

"The service currently has autoscaling cooldown periods of 1.5 minutes for scaling up and 5 minutes for scaling down"


From what I have read here and experienced personally, google scaling > amazon scaling > microsoft scaling, with each an order of magnitude greater in performance. (There is an article on Microsoft Functions vs. Amazon Lambda that gives you the idea, and don't even get me started on Google tech versus Microsoft/Amazon.)

However, it is the opposite for sales/marketing/support apparently. It'll be interesting to see where the market goes towards in the future.


I think Azure is pretty damn good for certain things.

http://muratbuffalo.blogspot.com/2018/08/azure-cosmos-db.htm...


    google scaling > amazon scaling
???

I've read many comments here about how Google's load balancers are way more reactive than Amazon's, since Amazon's need "warming" to get them ready. I've also heard great things about App Engine's dynamic scaling. Unfortunately, I cannot find any article purely about it.

Here is the article about Amazon's functions being way more reactive than Microsoft's: https://news.ycombinator.com/item?id=16099729


Amazon employee here. I can confirm that ALB and NLB do not need prewarming for most customers.

If you are Epic Games running Fortnite on AWS, and you want to direct millions of player connections through a brand new, freshly provisioned load balancer then you should definitely talk to support to make sure the load balancer is prewarmed and ready for that level of traffic.

But 99.99% of websites and services won't need intervention or prewarming at the load-balancer level, because the load balancer can and will scale up far faster than your backend server provisioning and scaling, or your database, will. You only need to worry about prewarming a load balancer in the very specific case where you are immediately redirecting millions of active connections to a new load balancer, and frankly there are very few companies that have that problem.

Additionally, even if you do have that problem, Amazon gives you the tools to solve it without needing any manual prewarming. Any blue/green traffic switchover at massive scale should probably use a weighted Route 53 DNS record set. You wouldn't immediately cut 100% of your traffic over to a new load balancer; instead, you dial it up in percentage increments while testing and monitoring the new stack. ALB and NLB can autoscale up gracefully and automatically as you increase the DNS weight on the new record.
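The weighted-record approach described above can be sketched against boto3's change_resource_record_sets ChangeBatch shape (the domain, set identifiers, ALB DNS names, and zone IDs below are placeholders):

```python
# Sketch: build a weighted Route 53 record pair that splits traffic 90/10
# between an old and a new load balancer. All names/IDs are placeholders.
def weighted_upsert(name, set_id, alias_dns, alias_zone_id, weight):
    """One UPSERT change for route53.change_resource_record_sets."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": name,
            "Type": "A",
            "SetIdentifier": set_id,  # distinguishes the weighted records
            "Weight": weight,         # relative share of traffic (0-255)
            "AliasTarget": {
                "HostedZoneId": alias_zone_id,
                "DNSName": alias_dns,
                "EvaluateTargetHealth": True,
            },
        },
    }

# Shift 10% of traffic to the new stack, keep 90% on the old one:
change_batch = {"Changes": [
    weighted_upsert("app.example.com.", "old-alb",
                    "old-alb.example.elb.amazonaws.com.", "ZELBEXAMPLE", 90),
    weighted_upsert("app.example.com.", "new-alb",
                    "new-alb.example.elb.amazonaws.com.", "ZELBEXAMPLE", 10),
]}
# boto3.client("route53").change_resource_record_sets(
#     HostedZoneId="ZEXAMPLE", ChangeBatch=change_batch)
```

Raising the new record's weight in steps (10 → 50 → 100) gives the new stack time to autoscale under real traffic.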


Joke of the week.

You have to respect the reputation Google accumulated over the years. And no one will reveal anything to settle this debate.

My biggest concern is how long it will take to warm up from cold. But it's good to know that it'll remain warm for 15 minutes.

It says 25 seconds on average.

The cold-to-warm state change sounds like it's something you would only ever want to do in batch jobs.

But I can see many situations where you have a DB that only needs to actually be awake for short periods of time throughout the day for batch jobs. And even more that only need the barest minimum capacity available 24/7 with large spikes for batch-based processing.


> But it's good knowing that it'll remain warm for 15 min.

Not sure what you're talking about. It's warm for as long as you want it to be. That's configurable. Read the post.


Very exciting! I made a simple bot to scrape lunch specials menus off Instagram and post them to my company's slack (http://blog.matthewbrunelle.com/projects/2018/05/07/Soup-Bot...).

One pain point was making custom Lambdas to spin up/down the DB and everything that was needed to use it (a NAT on EC2, and an Elastic IP, since the default NAT AWS provides is pricey). The bot only needs to run for an hour on weekdays, so there's no point keeping everything running 24/7. Excited to try replacing all of that with this!


> Pay only for the database resources you consume, on a per-second basis.

> You pay a flat rate per second of ACU usage, with a minimum of 5 minutes of usage each time the database is activated.

Cool idea, but billing-wise, an IO operation once every 5 minutes counts as a full-time service.

This could unlock a lot of cool potential if it had finer granularity. I can think of some IoT applications with infrequent operations that would be cost-effective with a smaller minimum.
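To make the 5-minute minimum concrete, here's a sketch of the billing rule (the per-ACU-hour rate is an assumption for illustration; check current AWS pricing):

```python
def billed_cost(active_seconds, acus, rate_per_acu_hour, min_seconds=300):
    """Cost of one activation: per-second ACU billing, with a 5-minute
    (300 s) minimum each time the database wakes up."""
    billable = max(active_seconds, min_seconds)
    return billable * acus * rate_per_acu_hour / 3600.0

# A 60-second burst at 2 ACUs is billed as 300 seconds; at an assumed
# $0.06/ACU-hour that's the same cost as doing nothing for 5 minutes.
print(billed_cost(60, 2, 0.06))
```

So an IoT workload that wakes the DB briefly every 5 minutes pays the same as one that keeps it active continuously.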


I had assumed that this would be an HTTPS service with a public endpoint, available outside a VPC. But it looks like it is still deployed inside the VPC and accessible over JDBC. Did anyone find any configuration option to deploy it outside a VPC? Overall, it looks like good old Aurora RDS with an auto start/stop option, and pricing based on that start/stop duration.

Aurora requires a VPC, but you can create public subnets to allow for external access to the database.

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/...


But then, placing the instance in a public subnet is also not a good idea. Moreover, JDBC connections are not optimized for serverless scenarios: the cost of connecting/disconnecting is too high compared to lightweight HTTP, and any vulnerability can potentially be exploited. I was hoping to see a true HTTP service, highly optimized for Lambda clients.

The blog post [1] provides additional insights that imply it can work well with clients despite its ephemerality, and that it is optimized for client connection pools.

"Scaling operations are transparent to the connected clients and applications since existing connections and session state are transferred to the new nodes."

As for the public subnet concern, you can apply security groups to your database instance to ensure only authorized CIDR ranges can reach it. Additionally, you can use IAM authentication [2] for DB callers (only recommended for lightweight applications with low concurrency).
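A sketch of what IAM database authentication looks like from a caller (the boto3 generate_db_auth_token call is a real RDS client method; the hostnames, user, and CA bundle path below are placeholders):

```python
# Sketch: connect to MySQL using a short-lived IAM auth token as the
# password. IAM auth requires SSL; all names/paths are placeholders.
def build_connect_kwargs(host, port, user, token):
    """Connection settings for a MySQL driver with an IAM auth token."""
    return {
        "host": host,
        "port": port,
        "user": user,
        "password": token,  # token is valid ~15 minutes
        "ssl": {"ca": "/opt/rds-ca-bundle.pem"},
    }

# In a Lambda with boto3 and credentials available (assumed):
# import boto3
# token = boto3.client("rds").generate_db_auth_token(
#     DBHostname="mydb.cluster-xyz.us-east-1.rds.amazonaws.com",
#     Port=3306, DBUsername="app_user")
# conn = pymysql.connect(**build_connect_kwargs(
#     "mydb.cluster-xyz.us-east-1.rds.amazonaws.com", 3306, "app_user", token))
```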

[1]: https://aws.amazon.com/blogs/aws/aurora-serverless-ga

[2]: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Using...


VPC only for now

AWS, why not PostgreSQL?

The bare minimum cost for Aurora PostgreSQL is $200/month. Aurora MySQL is an order of magnitude less.

Why not PostgreSQL?


It's on the way.

That's good to hear.

Because Aurora PostgreSQL has already been behind MySQL for years. Glad to hear this isn't a perpetual condition.


I have to say, it's really frustrating that Postgres is a second class citizen in the AWS ecosystem. Aurora in general is often far behind on Postgres support relative to MySQL. I hope this changes going forward.

Isn't AWS Redshift based on PostgreSQL?

I'm excited for this. I have a few small, low budget projects that would either require expensive RDS instances or awkward data management to force the use of Dynamo. Now there appears to be a low cost SQL solution for serverless projects.

You say low cost, but if I'm reading the article right, one of the lowest settings (2 capacity units) is 12 cents per hour, so if it's being minimally used 100% of the time, that's still a good $86 per month ($0.12 × 24 × 30 ≈ $86). You can get a cheap DB instance or hosting outside of Amazon for a fraction of that. And it goes up if the DB gets some actual load.

If your DB load is consistent and the cost of serverless outweighs a standard instance, then yes, go for a standard instance.

However if your DB load is very spiky then serverless Aurora might be cheaper than running an over provisioned instance.


They just need to give the option to run at 1 unit; that'd get you in the ballpark of a normal t2.small server.

This definitely works from a cost POV. We use a small instance of AWS Aurora which is mostly idle. If we go this route, we will need to see how quick the cold start is and how quick the ACU scaling is. A serverless SQL DB is definitely a nice way to go.

Is serverless Aurora available in a multi-AZ, high-availability form as well?

It is multi-AZ, yes.

For testing and development databases, I would prefer to scale to zero. Weird that they don't offer that as an option.

It does scale to 0 when there are no connections to the DB.

Cold-to-warm takes 25 seconds, but for dev/test this shouldn't be a problem.


Ah, I totally misread this line: "Aurora Serverless can scale from a minimum of 2 ACUs to a maximum of 256 ACUs. You can specify the minimum and maximum ACUs your database can consume." I thought it meant you had to always run 2 ACUs.

My bad.


Isn't that Pricing Example 2?

It's literally in the first two paragraphs of the article.


