How are you scaling your database when deploying on Serverless/Vercel/Netlify/Lambda?

This question is bigger than just Redwood, but I figured some people here might have some helpful thoughts.

I’m interested in deploying on Vercel, but I’m worried about the scalability of using Postgres as a backend because of the overhead of opening database connections on each request. I know Prisma recommends a connection pooler such as PgBouncer, but even that sounds like it might not be enough. Aurora Serverless has a Data API, but it isn’t supported by Prisma yet.
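Concretely, the pattern I’m worried about looks something like this (just a sketch to illustrate the problem, not code from a real app, and the User model is made up):

import { PrismaClient } from '@prisma/client'

// Anti-pattern: constructing a new PrismaClient inside the handler means every
// concurrent invocation opens its own connection pool, so a burst of Lambdas
// can exhaust Postgres's max_connections very quickly.
export const handler = async () => {
  const db = new PrismaClient() // new pool per invocation
  const users = await db.user.findMany() // `User` model is hypothetical
  await db.$disconnect()
  return { statusCode: 200, body: JSON.stringify(users) }
}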

Has anyone actually experimented with this or had any luck in production?


Hi @cjroth. Welcome to the forums!

The production RW apps I know of all use Postgres. Making sure your DB and your API Lambda functions are deployed in the same geographical region (e.g. everything in AWS us-east-1) and, as you say, using a connection pooler both go a long way toward making DB access performant.
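For what it’s worth, the baseline pattern (and what Redwood’s generated api/src/lib/db.ts already does) is to create a single PrismaClient at module scope, so a warm Lambda container reuses its existing connections instead of opening new ones on every request. Roughly:

// api/src/lib/db.ts
// Module scope: the client (and its connection pool) survives across
// invocations on a warm container, so only cold starts open new connections.
import { PrismaClient } from '@prisma/client'

export const db = new PrismaClient()

// Services import the shared instance instead of creating their own:
// import { db } from 'src/lib/db'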

But I’m super interested in knowing where the limits are. What you could do is set up a basic app and run some load tests. If you do, and share the results, it’s something that would benefit the whole community. But only if you want, of course no pressure :slight_smile:

Good idea, I might do that when I find some time :slight_smile: Do you recommend any tools for load testing?
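In case anyone else wants a starting point, something as small as an autocannon script pointed at a deployed function would probably do; a rough sketch (the endpoint URL and query are placeholders):

import autocannon from 'autocannon'

// Placeholder endpoint and query — point this at whatever deployed function
// or GraphQL operation actually hits the database.
const url = 'https://my-app.vercel.app/api/graphql'

autocannon(
  {
    url,
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ query: '{ redwood { version } }' }),
    connections: 50, // concurrent connections to hold open
    duration: 30, // seconds
  },
  (err, result) => {
    if (err) throw err
    console.log(result.requests, result.latency) // throughput and latency summaries
  }
)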

I’m curious to hear some case studies about the connection pool in production and how much load they can handle.

Hi guys,

I’m hitting my first load issues with Supabase and they are asking if I am using pgBouncer – so I’m very interested in this same topic

Thanks,
Al

Are you using pgBouncer?

If not, you should.

Please see Prisma’s documentation on connection pooling here for why:

There are also notes specific to Supabase:

Some useful videos on pooling:

Prisma also has a PgBouncer alternative planned, called the Prisma Data Proxy:
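One detail that trips people up: with PgBouncer in transaction mode, Prisma needs pgbouncer=true on the connection string (it disables prepared statements, which the pooler can’t track) and usually connection_limit=1, so each function instance holds a single connection and lets the pooler do the multiplexing. A rough sketch of wiring that in, with SUPABASE_POOLED_URL as a made-up env var (appending the same query params to DATABASE_URL in .env works just as well):

import { PrismaClient } from '@prisma/client'

// SUPABASE_POOLED_URL is hypothetical — it would point at the pooler
// (port 6543 on Supabase) rather than Postgres itself (port 5432).
const url = `${process.env.SUPABASE_POOLED_URL}?pgbouncer=true&connection_limit=1`

export const db = new PrismaClient({
  // `db` here must match the datasource name in schema.prisma
  datasources: { db: { url } },
})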

I would love to be using connection pooling! Years ago we were required to do it in our Java code, but all of my attempts so far have failed.

I tried these connection URLs with the different Supabase pooling modes (session, transaction, statement); statement seemed like the best option.

DATABASE_URL=postgresql://postgres:@db.ofdqpishocderuurtmbx.supabase.co:6543/postgres

DATABASE_URL=postgresql://postgres:@db.ofdqpishocderuurtmbx.supabase.co:6543/postgres?pgbouncer=true

Both of those failed very fast and hard, so I’m back here again, using the direct connection below and rate limiting my bulk actions to 10/sec:

DATABASE_URL=postgresql://postgres:@db.ofdqpishocderuurtmbx.supabase.co:5432/postgres

You cannot simply use those connection strings – you also need to enable connection pooling on your Supabase project:

Note: If your project is very old, it may not have this option. You will have to create a new one and migrate over.

The Redwood docs don’t say that specifically (since Supabase’s setup could change and they are the primary source for their own setup and config), but they do link to the Supabase site, which notes:

Prisma requires Transaction mode:

I just created a new DB on Supabase and connected to it using the connection pooling URL. Works great!
But what’s RW’s solution to the migration issue?

Prisma Migrate uses database transactions to check out the current state of the database and the migrations table. However, the Migration Engine is designed to use a single connection to the database, and does not support connection pooling with PgBouncer. If you attempt to run Prisma Migrate commands in any environment that uses PgBouncer for connection pooling, you might see the following error:

Error: undefined: Database error
Error querying the database: db error: ERROR: prepared statement "s0" already exists

To work around this issue, you must connect directly to the database rather than going through PgBouncer. How to achieve this depends on your setup or provider:
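In practice that means keeping two connection strings around: the pooled one (port 6543 on Supabase) for the running app, and the direct one (port 5432) for anything that can’t go through PgBouncer. Migrations run through the Prisma CLI, so there the override is just an environment variable on the command; for one-off scripts or seeds the same idea works in code. A rough sketch, with DIRECT_DATABASE_URL as a made-up env var:

import { PrismaClient } from '@prisma/client'

// DIRECT_DATABASE_URL is hypothetical — a connection string that bypasses
// PgBouncer and talks to Postgres directly (port 5432 on Supabase).
const directUrl = process.env.DIRECT_DATABASE_URL
if (!directUrl) throw new Error('DIRECT_DATABASE_URL is not set')

const db = new PrismaClient({
  datasources: { db: { url: directUrl } },
})

async function main() {
  // e.g. seeds or bulk maintenance work that would otherwise fight the pooler
  await db.$queryRaw`SELECT 1`
}

main().finally(() => db.$disconnect())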

But what’s RW’s solution to the migration issue?

There is an open issue for this on Prisma’s GitHub and another issue explaining the problem in more detail. You’ll see a familiar face in the first link:

The workaround I was thinking of for now is to override the database URL in the migrate command only, like:

in my CI

DATABASE_URL=$MIGRATE_DB_URL yarn prisma migrate deploy

Thanks @ajcwebdev
Lots of familiar faces at that link, not just @danny :slight_smile:

Aurora has two engines, one for MySQL and one for Postgres. We’re about to deploy to prod using it (Postgres).
Would it fit your needs?