AWS Aurora Serverless V2

May 12, 2022 | 4 minute read

I dug into Aurora “Serverless” v2 on release day with one of my clients - we were hoping for a magic bullet.

Sadly, much like v1, v2 is still missing the “serverless” piece.

Andreas Wittig wrote up an excellent high level summary of the changes that AWS Aurora “Serverless” V2 brings.

You can read Andreas' comments on LinkedIn. Thanks for this summary Andreas!

What this is, really, is just RDS Aurora Rapid Autoscaling. Don’t get me wrong - it’s progress! It’s a major step forward for RDS in terms of managing scaling.

Calling this “serverless” is just like calling DynamoDB “serverless” between 2012 and 2018 - or the rest of RDS “serverless”. I don’t manage the servers, but I still have to manage capacity, even if AWS doesn’t call the units “server instances”.

Andreas mentioned the minimum cost of $43 per month - but in reality that’s a single node in your cluster. With Aurora, you manage each node separately. If you want a multi-AZ failover node for real production workloads, that’s going to cost another $43. Nodes don’t get spun up on demand; they only scale up rapidly from what’s already running, so HA == 2 or more nodes. The ACU scaling limits can be managed separately for the “readers”, if you so choose. But choose wisely! This is discussed in https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.how-it-works.html
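
To make the “you still manage capacity” point concrete, here’s a minimal boto3 sketch of what that management looks like. The cluster and instance names are placeholders and the parameters reflect my understanding of the current RDS API, so verify against the docs before using any of it:

```python
# Sketch only: you still declare an ACU floor/ceiling per cluster, and you still
# add (and pay for) a second instance yourself if you want multi-AZ failover.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Set the ACU scaling range for the cluster ("my-cluster" is a placeholder).
rds.modify_db_cluster(
    DBClusterIdentifier="my-cluster",
    ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 16},
    ApplyImmediately=True,
)

# HA is not automatic: add a second db.serverless instance to the cluster.
rds.create_db_instance(
    DBClusterIdentifier="my-cluster",
    DBInstanceIdentifier="my-cluster-failover",
    DBInstanceClass="db.serverless",
    Engine="aurora-postgresql",  # must match your cluster's engine
)
```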

Here’s how I define serverless:

  • There is no theoretical minimum or maximum scaling limit.

  • There is no minimum cost (other than storage or other supporting resources).

  • You pay for the actual work performed - per-request, in effect. (Remember how SimpleDB actually returned the information you needed to compute your request’s cost right in the response? I miss that…)

Aurora “Serverless” v2 fails all of these tests. The actual serverless services - S3, Route53, SimpleDB, DynamoDB on-demand, and Lambda - all pass them.

There is hope though - an autoscaling SQL solution is epic. A major step forward. I dream that in the next few years we’ll get an API to access this service (and matching client libraries), and a true serverless option. There is certainly precedent for this happening.

When DynamoDB was released in 2012 it was completely manually scaled - it took 5 years before autoscaling arrived in 2017. By that time the community (myself included) had built our own tools to manage the autoscaling, and at release AWS’s answer wasn’t any better - it just meant we didn’t have to run our own autoscaler infrastructure, and it was crippled by the same limits. In 2018 we got DynamoDB on-demand, which felt like the beginning of a real answer.

Keep in mind - DynamoDB on-demand isn’t AWS’s first pure serverless database - we have S3, SimpleDB, and Route53 as prior art. Aurora Serverless definitely isn’t that.

Thinking about running Serverless v2 in production? A few thoughts:

  • maximum scaling isn’t infinity - it’s capped at 128 ACUs (roughly $11k/month per node in us-east-1; the back-of-the-envelope math is just after this list).

  • TEST its autoscaling model against your data access patterns and your application’s behavior under load. It’s fast, not instant - you may still have to handle an overloaded database during a rapid spike. The documentation simply says “The time it takes for an Aurora Serverless v2 DB instance to scale from its minimum capacity to its maximum capacity depends on the difference between its minimum and maximum ACU values.” (A monitoring sketch for this follows the list.)

  • max_connections is - as always - a major issue. Just like with regular RDS and Aurora, max_connections is dynamically derived from the memory footprint of the server (haha!) associated with the current ACU - except when you set 0.5 ACU as your minimum and use PostgreSQL, in which case it’s fixed at 2,000 regardless of the current or maximum ACU.

  • buffer pool sizes follow a similar pattern.

It’s not clear if these can be overridden in your parameter group like they can be for regular RDS (we used to push these a LOT higher than the defaults for some workloads - they’re tuned very “safe” out of the box). If someone tests this, please share! A rough sketch of how I’d try it follows the docs link below.

  • changing ACU min/max may require a (manual or scheduled) reboot for some of the dynamic parameter changes (like max_connections).
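
For what it’s worth, here’s the back-of-the-envelope math behind that ~$11k/month ceiling, assuming the us-east-1 launch price of $0.12 per ACU-hour (check current pricing before relying on this):

```python
# Rough monthly ceiling for one node pinned at maximum capacity.
# The per-ACU-hour price is assumed (us-east-1 at launch); verify against current pricing.
acu_price_per_hour = 0.12   # USD, assumed
max_acu = 128
hours_per_month = 730

print(f"${max_acu * acu_price_per_hour * hours_per_month:,.0f}/month")  # -> $11,213/month
```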
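
And here’s a minimal sketch of the kind of test I mean: run your load generator and watch how quickly the cluster’s capacity actually follows. The metric and dimension names reflect my understanding of what Aurora publishes to CloudWatch, and “my-cluster” is a placeholder - treat this as a starting point, not a verified recipe.

```python
# Poll the cluster's current capacity (in ACUs) while a load test runs elsewhere.
from datetime import datetime, timedelta, timezone
import time

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

def current_acu(cluster_id):
    """Most recent 1-minute average of ServerlessDatabaseCapacity, or None."""
    now = datetime.now(timezone.utc)
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName="ServerlessDatabaseCapacity",   # assumed metric name - confirm in your console
        Dimensions=[{"Name": "DBClusterIdentifier", "Value": cluster_id}],
        StartTime=now - timedelta(minutes=5),
        EndTime=now,
        Period=60,
        Statistics=["Average"],
    )
    points = sorted(resp["Datapoints"], key=lambda p: p["Timestamp"])
    return points[-1]["Average"] if points else None

# Log capacity once a minute for half an hour while the load generator runs.
for _ in range(30):
    print(datetime.now(timezone.utc).isoformat(), current_acu("my-cluster"))
    time.sleep(60)
```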

Here’s the closest I can find to “documentation” on how max_connections (and buffer pools) relate to ACUs, and it’s pretty muddy. https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.setting-capacity.html
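
If anyone does want to test the parameter-group question, here’s roughly how I’d approach it - untested against Serverless v2, with placeholder names and values: override max_connections in the cluster parameter group, reboot so pending changes apply, then compare against what the engine actually reports (SHOW max_connections;).

```python
# Untested sketch: attempt a max_connections override and apply it with a reboot.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Override the derived default in the cluster parameter group (names/values are placeholders).
rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="my-serverless-v2-params",
    Parameters=[{
        "ParameterName": "max_connections",
        "ParameterValue": "5000",
        "ApplyMethod": "pending-reboot",
    }],
)

# Pending-reboot parameters only take effect after the instance restarts.
rds.reboot_db_instance(DBInstanceIdentifier="my-cluster-writer")
```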

I’m always happy to chat if anyone wants to discuss this and how it might apply to the software you’re running in production.