Limitations using AWS DBs with Netlify Functions

This is based on @thedavid’s comment here.

Following what's currently documented on Redwood's website, I tried and failed to set up my PostgreSQL database behind AWS RDS Proxy.

The reason is that my Redwood app is hosted on a non-enterprise Netlify plan, and RDS Proxy requires the Lambda functions to be in the same VPC as the proxy. On Netlify's enterprise plans you can bring your own AWS account, so this might be solvable there.

I’m just creating this post to see if anyone has any other insights around this.

I’ve also just opened a pull request to add a note about this limitation to the docs.


Thanks @betocmn – and especially for opening the PR!

I haven’t set up a Redwood app with Netlify + an AWS DB myself, but I know of several apps that have – all without the Enterprise plan. Listing what I know here in case those folks can chime in:

  • @rob is running an app with AWS Postgres (correct?)
  • The first Redwood app launched (to our knowledge) is https://predictcovid.com/, and it uses an AWS DB.
  • And @chris-hailstorm is running app(s) that connect to several different types of AWS DB/storage.

So it sounds possible, but clearly, at best, it's frustratingly complex. I know the Tape.sh team tried like you did, ended up on DigitalOcean (with PgBouncer), and have been very satisfied.

If anything, this is definitely good feedback for Netlify about the current challenges of running Redwood.

Hey @betocmn, you’re correct about RDS Proxy. As David mentioned, we ended up using DigitalOcean’s managed DB. Some other options we explored:

  • https://smartdb.io/
    This looked very promising and cost-effective, but at the time it was missing something we needed (I can’t remember what)
  • connecting directly to AWS RDS

In terms of performance, we are indeed very happy with DO. I did some load testing against an authenticated GraphQL endpoint. Without connection pooling, we saw roughly 10–30 concurrent responses per second before the DB connections started erroring out (on small instances). With PgBouncer on DigitalOcean, the DB didn’t flinch; we kept going at something like 80–150 TPS, and even that was only because Cloudflare/Netlify was throttling requests (my tests weren’t very sophisticated).
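For anyone wanting to try the same setup: since Redwood talks to the database through Prisma, you can point it at a PgBouncer endpoint with a couple of connection-string flags. A minimal sketch – host, credentials, port, and database name are placeholders, not our actual setup; `pgbouncer=true` tells Prisma to avoid prepared statements, which transaction-mode pooling doesn’t support:

```
# .env – placeholder values; use your own cluster's pooled connection details
DATABASE_URL="postgresql://user:password@your-pool-host.example.com:25061/mydb?pgbouncer=true&connection_limit=1"
```

`connection_limit=1` keeps each Lambda instance to a single connection, which is usually what you want when the pooler is doing the multiplexing.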

My partner @iggy was meant to write a guide about it – I’m sure he’ll get round to it soon.

If you’re up for it, you could also set up your own PgBouncer instance on AWS to use with RDS.
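If you go the self-hosted route (e.g. PgBouncer on a small EC2 instance in front of RDS), the config is fairly small. A rough sketch – the hostname and pool sizes below are illustrative, not recommendations:

```ini
; pgbouncer.ini (illustrative values)
[databases]
; forward the logical DB name to the real RDS endpoint
mydb = host=mydb.example.us-east-1.rds.amazonaws.com port=5432 dbname=mydb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling suits short-lived Lambda connections
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20
```

Transaction pooling is the mode that lets many short-lived Lambda connections share a small number of real Postgres connections, at the cost of features like prepared statements and session-level settings.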

Just a heads-up – I’m not a DB or infrastructure expert :). Please do let us know how you get on!


I’ve done both AWS RDS Postgres and AWS RDS Aurora (MySQL compatible) with no issues. The downside is that you need to make them accessible to the entire internet, so make sure you have a strong username/password. :grimacing: (Technically you could limit access to just us-east-1 IP addresses, but there are MILLIONS of them and maintaining that list would be a huge drag.)

Assuming your functions are well behaved, they should each use only one connection and close it when they’re done. But if you suddenly had a spike in traffic, a lot of connections could get opened as AWS spins up a ton of Lambdas to meet the demand. We ran a test before predictcovid.com went live: we got up to something like 10,000 requests a minute, and it had opened roughly 700 DB connections. I believe we were testing against a 2xlarge or 4xlarge RDS Aurora instance.


Thanks so much, everyone – a lot of super helpful information.

Maybe I should’ve titled this topic “Limitations using AWS RDS Proxy with Netlify Functions”, since I meant the issue I hit when following the Redwood docs’ advice about connection pooling.

@danny SmartDB looks cool; I got in touch to understand how it works behind the scenes (I can’t find public docs on exactly how it works).

@rob great to know the specific instance size when using RDS directly in production. I think that’s how I’ll start: get a large RDS instance, run a bunch of load tests, and scale down as I learn.

I will also update my pull request to include more information after this discussion. Thanks again!
