The ability to scale down to zero could certainly be useful for automated testing use cases.
As with anything on AWS it will take some testing to discover all the quirks and caveats, but I like the general direction this is headed.
In this case, Aurora Serverless gets the name because it will scale to 0 which is useful for things like Lambda or Batch functions that only get called sporadically.
But I guess for Amazon, inefficiency translates to $$$, so it wouldn't really make sense for them to do this.
I wonder about that more generally with the cloud as well. For many things, making the backends multi-tenant would be far cheaper for the customers and only a bit more expensive for the providers. There's the security aspect, but is it really that different? Even with VMs, the kernel and hypervisor still need to not leak information, and even with physical servers the "reboot and recovery" and remote-control mechanisms are often still avenues for security exploits.
This low level separation is easy and cheap to implement for the providers, but (it seems to me) much more expensive for the customers.
But I guess that's pretty much true for all cloud stuff. It provides automation, but at an operational cost, which is really not the trade off I'd like to make.
Once you see your environment adapt to the load on the fly, I don't think you ever want to go back to manually handling it.
Serverless sounds like the better term to me still because it sounds exactly like what it describes: you never have to deal with a server. Although, I would actually add auto scaling to the name (serverless autoscaling aurora) so users know they don't have to deal with servers nor autoscaling.
What do you guys think?
On the other hand, they came up with Bing (talk about hilariously bad), and Azure. Which they've chosen to pronounce AZH-ər (Ajer) and not AZ-yoor (Ajur, like in Côte d'Azur). This is non-obvious, and makes them sound like uncultured Americans (like when you pronounce niche as nitch; instead of like cache, which I haven't heard pronounced catch (yet)).
After all, we have Containers, Platforms, and Infrastructure as a Service.
Isn't that exactly Pricing Example 2?
The Serverless section in the Aurora FAQ is also worth reading. The main gotcha, I think, is:
>> Q: Why isn't my Aurora Serverless DB Cluster automatically scaling?
>> Once a scaling operation is initiated, Aurora Serverless attempts to find a scaling point, which is a point in time at which the database can safely complete scaling. Aurora Serverless might not be able to find a scaling point if you have long-running queries or transactions in progress, or temporary tables or table locks in use.
So I think one will need to focus on quick OLTP type queries.
Still, I am quite excited to see how this shapes up.
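For what it's worth, the API does offer an escape hatch for the scaling-point gotcha quoted above. A hedged sketch, assuming boto3's ModifyCurrentDBClusterCapacity call for Aurora Serverless (the cluster name is hypothetical):

```python
# Sketch: when Aurora Serverless can't find a safe scaling point, you can
# request a capacity change with a timeout action that forces it through.
def capacity_change_params(cluster_id, capacity, timeout_s=300):
    """Build parameters for rds.modify_current_db_cluster_capacity.

    TimeoutAction='ForceApplyCapacityChange' applies the change even if no
    safe scaling point is found within the timeout, at the cost of
    interrupting long-running queries or transactions.
    """
    return {
        "DBClusterIdentifier": cluster_id,
        "Capacity": capacity,
        "SecondsBeforeTimeout": timeout_s,
        "TimeoutAction": "ForceApplyCapacityChange",
    }

# Usage (assumes boto3 and AWS credentials):
#   boto3.client("rds").modify_current_db_cluster_capacity(
#       **capacity_change_params("my-serverless-cluster", 4))
```

Forcing the change drops the connections that were blocking the scaling point, so it only makes sense if your workload can tolerate interrupted transactions.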
Every time I’ve seen infrastructure people come into AWS from an on-prem mindset, they screw it up. One example: they may have Dev, UAT, and production environments in one account instead of using separate accounts. That works okay as long as you aren’t doing anything but EC2 instances, but as soon as you start doing anything else it gets complicated, because most AWS resource names have to be unique across the account.
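The account-per-environment setup described above usually boils down to assuming a role in the target account. A minimal sketch, assuming STS role assumption; the account IDs and role name here are hypothetical:

```python
# Each environment lives in its own AWS account, so resource names only
# need to be unique within that environment's account.
ENV_ACCOUNTS = {
    "dev": "111111111111",
    "uat": "222222222222",
    "prod": "333333333333",
}

def assume_role_params(env, role_name="deploy", session_name="ci"):
    """Build parameters for sts:AssumeRole into the environment's account."""
    return {
        "RoleArn": f"arn:aws:iam::{ENV_ACCOUNTS[env]}:role/{role_name}",
        "RoleSessionName": session_name,
    }

# Usage (assumes boto3 and credentials permitted to assume the role):
#   creds = boto3.client("sts").assume_role(**assume_role_params("dev"))["Credentials"]
```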
It solves already solvable problems (albeit with more specialised tools/people) with a 'magic' solution that works until it doesn't, and then requires even more specialised people to maybe solve it.
But that's the whole profit model for IaaS: offer 'magic' solutions that are worse than what a qualified ops team will build, for the same money, to small companies who don't actually need the bells and whistles, and hope that by the time they do need the bells and whistles they're locked into your platform too much to change when they inevitably realise they made a huge mistake.
History has taught me that accepting the inevitability of change is far better than trying to build a thing with so many features that it need never be changed.
In this particular case, I find it hard to believe that anybody will be "locked in" to a serverless SQL database in the same sense that, say, being a game company on top of iOS "locks you into the Apple Ecosystem," or writing your app in Rails "locks you into Ruby."
I didn't say that, did I? AWS/GCP/Azure/etc. as a whole are platforms that companies become locked into, both technically and in terms of mindset.
In my experience the cost savings are because people assume that AWS means "I don't need Ops (any more)" and have developers with zero ops/sysadmin experience running their production environments.
That being said, it can burn you. For an ETL platform I was developing cobbling it together with Lambda and other services was a nightmare. Things you'd expect to work didn't. And the cost was way too high for concurrency. So I went back to a single box with beanstalkd and common tools.
But I agree many go with a promised solution without the expertise and when it fails or gets hacked they're screwed. And those costs aren't factored in.
If we're not hosting and maintaining and the client doesn't have skillset in-house I'll have a client go to Rackspace or similar.
Because no one ever had scaling issue before IaaS that required major architectural changes....
True. Don’t use Aurora of any fashion for OLAP. That’s what Redshift is for.
And since it always comes up:
* Yes, there is plenty of reason to want to use MySQL with lambda. Wanting to run software on FaaS does not mean wanting to abandon rdbms. For a small app, dynamodb is overkill; for a small app that turns into a large app, dynamodb is a money pit.
* No, adding scheduled heartbeat requests to the lambda functions so they never have to cold start is not a real, long term, scalable solution. It's a hack, it doesn't solve the problem if your app actually scales up, and infrastructure shouldn't depend on horrible hacks to function correctly.
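For reference, the heartbeat hack being criticized looks roughly like this (the event shape is my assumption; the ping would come from a scheduled CloudWatch Events rule):

```python
# The "keep-warm" hack: a scheduled rule invokes the function every few
# minutes with a marker payload, and the handler returns early so the
# container never goes cold.
def handler(event, context=None):
    if isinstance(event, dict) and event.get("warmer"):
        return {"warmed": True}  # keep-alive ping: do no real work
    # ... real request handling would go here ...
    return {"warmed": False, "result": "handled"}
```

Note that each ping only keeps a single container warm, which is exactly why the comment above calls it out: the moment real traffic needs concurrent containers, the extras still cold start.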
I guess everyone is just using DynamoDB with their lambda functions, but I miss a lot of the power Postgres has.
If my use case is small, that is only true because everyone else gave up on this use case, because it doesn't work, or resigned themselves to using a terrible hack to get it to work. Even though AWS Marketing heavily implies it's a great workflow, and leaves developers to run face first into its problems.
To be clear though, are you saying that there's a roadmap for fixing Lambda in VPC, or a roadmap for connecting to Aurora with Lambda without being in a VPC? At this point I'd take the latter and forget about the rest.
> The cool down period for scale-down is 15 minutes since the last scaling operation. The cool down period for scale-up is 5 minutes since the last scaling operation.
"The service currently has autoscaling cooldown periods of 1.5 minutes for scaling up and 5 minutes for scaling down"
However, it is the opposite for sales/marketing/support, apparently. It'll be interesting to see where the market goes in the future.
google scaling > amazon scaling
Here is the article about Amazon's functions being far more reactive than Microsoft's: https://news.ycombinator.com/item?id=16099729
If you are Epic Games running Fortnite on AWS, and you want to direct millions of player connections through a brand new, freshly provisioned load balancer then you should definitely talk to support to make sure the load balancer is prewarmed and ready for that level of traffic.
But 99.99% of websites and services won't need intervention or prewarming at the load balancer level, because the load balancer can and will scale up far faster than your backend server provisioning and scaling, or your database, will. You only need to worry about prewarming a load balancer in the very specific case where you are immediately redirecting millions of active connections to a new load balancer, and frankly very few companies have that problem.
Additionally even if you do have that problem Amazon gives you the tools to solve it without needing any manual prewarming. Any blue/green traffic switchover at massive scale should probably use a weighted Route 53 DNS record set. You wouldn't immediately cut 100% of your traffic over to a new load balancer, instead you should dial it up in percentage increments while testing and monitoring the new stack. ALB and NLB can autoscale up gracefully and automatically as you increase the DNS weight on the new DNS record.
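The weighted cutover described above can be sketched as a Route 53 ChangeBatch builder. The zone ID, record name, and ALB DNS names are hypothetical, and the actual call assumes boto3:

```python
# Blue/green cutover via weighted DNS: two records with the same name,
# with traffic dialed from the old ALB to the new one in increments.
def weighted_record(name, set_id, alb_dns, weight, ttl=60):
    """One UPSERT change for a weighted CNAME in a Route 53 ChangeBatch."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": name,
            "Type": "CNAME",
            "SetIdentifier": set_id,
            "Weight": weight,
            "TTL": ttl,
            "ResourceRecords": [{"Value": alb_dns}],
        },
    }

def cutover_batch(name, old_dns, new_dns, new_pct):
    """Shift new_pct percent of traffic to the new load balancer."""
    return {"Changes": [
        weighted_record(name, "old", old_dns, 100 - new_pct),
        weighted_record(name, "new", new_dns, new_pct),
    ]}

# Usage (assumes boto3):
#   boto3.client("route53").change_resource_record_sets(
#       HostedZoneId="Z123EXAMPLE",
#       ChangeBatch=cutover_batch("app.example.com",
#                                 "old-alb.elb.amazonaws.com",
#                                 "new-alb.elb.amazonaws.com", 10))
```

Start with a small `new_pct`, watch the new stack's metrics while its autoscaling catches up, then ratchet the weight toward 100.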
The cold-to-warm state change sounds like it's something you would only ever want to do in batch jobs.
But I can see many situations where you have a DB that only needs to actually be awake for short periods of time throughout the day for batch jobs. And even more that only need the barest minimum capacity available 24/7 with large spikes for batch-based processing.
Not sure what you're talking about. It's warm for as long as you want it to be. That's configurable. Read the post.
> You pay a flat rate per second of ACU usage, with a minimum of 5 minutes of usage each time the database is activated.
Cool idea, but an I/O operation once every 5 minutes is billed as a full-time service.
This could unlock a lot of cool potential if it had finer granularity. I can think of some IoT applications with infrequent operations that would be cost-effective with a shorter minimum.
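Back-of-envelope arithmetic for the 5-minute floor (the logic follows the quoted pricing rule; any per-second ACU rate you plug in is an illustrative placeholder, not AWS's actual price):

```python
MIN_BILLED_SEC = 5 * 60  # 5-minute minimum per activation

def billed_seconds(activations_per_hour, seconds_per_activation):
    """Seconds billed in one hour, applying the per-activation minimum."""
    per_activation = max(seconds_per_activation, MIN_BILLED_SEC)
    return min(activations_per_hour * per_activation, 3600)

# One 1-second IoT write every 5 minutes is 12 activations/hour,
# so 12 * 300s = 3600s billed: a full hour for ~12 seconds of work.
```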
One pain point was making custom lambdas to spin up/down the DB and everything needed to use it (a NAT on EC2, and an Elastic IP, since the default NAT AWS provides is pricey). The bot only needs to run for an hour on weekdays, so there's no point keeping everything running 24/7. Excited to try replacing all of that with this!
"Scaling operations are transparent to the connected clients and applications since existing connections and session state are transferred to the new nodes."
As for the public subnet concern, you can apply security groups to your database instance to ensure only authorized CIDR ranges can reach it. Additionally, you can use IAM authentication for DB callers (only recommended for lightweight applications with low concurrency).
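A minimal sketch of the IAM authentication path, assuming boto3's `generate_db_auth_token`; the hostname and user are hypothetical, and the caller also needs an IAM policy granting `rds-db:connect`:

```python
# IAM database authentication: instead of a stored password, the caller
# requests a short-lived signed token and presents it as the password.
def auth_token_params(host, user, port=3306):
    """Parameters for boto3's rds client generate_db_auth_token."""
    return {"DBHostname": host, "Port": port, "DBUsername": user}

# Usage (assumes boto3 and a MySQL driver such as pymysql, over TLS):
#   token = boto3.client("rds").generate_db_auth_token(**auth_token_params(
#       "my-cluster.cluster-abc.us-east-1.rds.amazonaws.com", "app_user"))
#   conn = pymysql.connect(host="...", user="app_user", password=token,
#                          ssl={"ca": "rds-ca-bundle.pem"})
```

The tokens are short-lived and must be signed per connection, which is part of why this is only recommended for low-concurrency callers.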
The bare minimum cost for Aurora PostgreSQL is $200/month. Aurora MySQL is an order of magnitude less.
Why not PostgreSQL?
Because Aurora PostgreSQL has lagged behind MySQL for years. Glad to hear this isn't a perpetual condition.
However if your DB load is very spiky then serverless Aurora might be cheaper than running an over provisioned instance.
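A rough cost model for the spiky-load claim above (both rates are illustrative placeholders, not AWS prices):

```python
# Provisioned: billed 24/7 regardless of load.
# Serverless: billed per ACU only while the workload is active.
PROVISIONED_PER_HOUR = 0.29  # illustrative rate for a mid-size instance
ACU_PER_HOUR = 0.06          # illustrative per-ACU-hour rate

def monthly_provisioned():
    """30-day cost of an always-on provisioned instance."""
    return PROVISIONED_PER_HOUR * 24 * 30

def monthly_serverless(busy_hours_per_day, acus_when_busy):
    """30-day cost if the cluster only runs during busy hours."""
    return ACU_PER_HOUR * acus_when_busy * busy_hours_per_day * 30

# 4 busy hours/day at 8 ACUs comes in well under the provisioned instance;
# run those 8 ACUs 24/7 and serverless costs more.
```

The crossover point depends entirely on how spiky the load really is, which is exactly the comment's caveat.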
Cold-to-warm takes 25 seconds, but for dev/test this shouldn't be a problem.