I was able to deploy the API using serverless.com on AWS. I used the HTTP API type with a Lambda proxy in API Gateway because it's considerably cheaper than the REST API mode. Then I used Terraform to create a new VPC and subnets with custom ACLs, and deployed an Aurora Serverless cluster in the VPC. To allow the Lambda to talk to the database, I had to assign the Lambda to the specific VPC and subnets and attach the correct security group. There were a lot of quirks, but I finally managed to get it working. Also, to connect to the DB locally, I had to set up a bastion host that tunnels to the DB.
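For anyone who wants a starting point, here's a minimal sketch of that layout in Terraform. All names, CIDRs, and the `var.db_password` variable are placeholders, not my actual configuration:

```hcl
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

# Aurora needs subnets in at least two AZs.
resource "aws_subnet" "private_a" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"
}

resource "aws_subnet" "private_b" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "us-east-1b"
}

resource "aws_db_subnet_group" "db" {
  name       = "aurora-serverless"
  subnet_ids = [aws_subnet.private_a.id, aws_subnet.private_b.id]
}

# The Lambda gets its own security group...
resource "aws_security_group" "lambda" {
  vpc_id = aws_vpc.main.id
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# ...and the DB only accepts connections from that security group on 5432.
resource "aws_security_group" "db" {
  vpc_id = aws_vpc.main.id
  ingress {
    from_port       = 5432
    to_port         = 5432
    protocol        = "tcp"
    security_groups = [aws_security_group.lambda.id]
  }
}

resource "aws_rds_cluster" "db" {
  cluster_identifier     = "api-db"
  engine                 = "aurora-postgresql"
  engine_mode            = "serverless"
  master_username        = "postgres"
  master_password        = var.db_password
  db_subnet_group_name   = aws_db_subnet_group.db.name
  vpc_security_group_ids = [aws_security_group.db.id]
}
```

The Lambda then gets deployed into the two private subnets with the `lambda` security group attached.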
Some things I noticed:
If you place a Lambda in a VPC, it won't have access to the public internet. You'll have to set up a NAT Gateway (which costs ~$40 a month) in a public subnet and place the Lambda in a private subnet to give it internet access.
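The NAT piece looks roughly like this in Terraform (again a sketch with placeholder names; it assumes the `aws_vpc.main` and private subnets from above, and the public subnet also needs a route to the internet gateway for the NAT to work):

```hcl
resource "aws_internet_gateway" "gw" {
  vpc_id = aws_vpc.main.id
}

resource "aws_subnet" "public" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.100.0/24"
}

resource "aws_eip" "nat" {
  vpc = true  # on newer provider versions: domain = "vpc"
}

# The NAT Gateway itself lives in the *public* subnet.
resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public.id
}

# Private subnets route outbound traffic through the NAT.
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat.id
  }
}

resource "aws_route_table_association" "private_a" {
  subnet_id      = aws_subnet.private_a.id
  route_table_id = aws_route_table.private.id
}
```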
Aurora Serverless is great, but the cold starts can get pretty long (~30 seconds). You can configure it to stay always on, but that costs ~$80/month for Postgres and ~$40/month for MySQL (hopefully it'll be $40 for Postgres too in the near future).
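The cold-start behavior is controlled by the cluster's scaling configuration. Something like this (capacity numbers are just examples):

```hcl
resource "aws_rds_cluster" "db" {
  # ... engine, credentials, networking as before ...
  engine_mode = "serverless"

  scaling_configuration {
    auto_pause               = true  # pausing is what causes the ~30s resume
    min_capacity             = 2
    max_capacity             = 4
    seconds_until_auto_pause = 300
  }
}
```

Setting `auto_pause = false` keeps the cluster always on at the monthly cost mentioned above.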
I like the secure-by-default setup of Aurora Serverless: it disallows public access and can only be reached from within a VPC. With other managed database services (Heroku, DigitalOcean, etc.), the DB has to be publicly accessible to be reached by Lambdas in AWS, which is not very secure. For a private DB, you'll likely have to stick to one cloud provider.
Deploying the API directly on AWS requires a little more effort, but results in considerable cost savings - a tradeoff between ease of deployment and cost. - The Data API is available for Aurora Serverless, and there is an open issue to support it in Prisma - Add support for AWS Data API (AWS Aurora Serverless) · Issue #1964 · prisma/prisma · GitHub. This would be great for the connection-limit and pooling issues of serverless functions. Until then, I guess the only option is to work around it (e.g. limiting concurrent Lambda executions). The Data API also removes the need to put the Lambda in the same VPC: you can attach an IAM role to the Lambda and connect to the serverless DB without worrying about secrets - the ideal scenario, hopefully.
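For reference, enabling the Data API on the cluster is a one-liner in Terraform, plus an IAM policy on the Lambda's execution role. This sketch assumes a hypothetical `aws_iam_role.lambda` and `aws_secretsmanager_secret.db` (the Data API authenticates against a Secrets Manager secret holding the DB credentials):

```hcl
resource "aws_rds_cluster" "db" {
  # ... engine, credentials, networking as before ...
  engine_mode          = "serverless"
  enable_http_endpoint = true  # turns on the Data API
}

resource "aws_iam_role_policy" "data_api" {
  role = aws_iam_role.lambda.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "rds-data:ExecuteStatement",
          "rds-data:BatchExecuteStatement",
          "rds-data:BeginTransaction",
          "rds-data:CommitTransaction",
          "rds-data:RollbackTransaction",
        ]
        Resource = aws_rds_cluster.db.arn
      },
      {
        Effect   = "Allow"
        Action   = ["secretsmanager:GetSecretValue"]
        Resource = aws_secretsmanager_secret.db.arn
      },
    ]
  })
}
```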
These are the main things I noticed. I’ll add more if something comes to mind. Very interested and happy to discuss this topic further.
thanks for this. definitely pertinent as neither Netlify nor Vercel will give you a static ip.
i have done something similar with the lambda in a vpc - one challenge here is that you still need a mechanism for securely connecting to the DB. you can have the proxy lambda check an api key or something in the header (i've done this at scale with AWS API Gateway - would not recommend), but then you need to safely store the secrets somewhere not on the client.
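On the AWS side, one way to keep those secrets off the client is to park them in Secrets Manager and grant only the proxy Lambda's role read access. A rough Terraform sketch (the secret name and `aws_iam_role.proxy_lambda` are hypothetical):

```hcl
resource "aws_secretsmanager_secret" "api_key" {
  name = "my-api/proxy-key"  # placeholder name
}

# Only the proxy Lambda's execution role can read the secret;
# the key never ships to the client.
resource "aws_iam_role_policy" "read_secret" {
  role = aws_iam_role.proxy_lambda.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["secretsmanager:GetSecretValue"]
      Resource = aws_secretsmanager_secret.api_key.arn
    }]
  })
}
```

It doesn't solve the Netlify-side storage problem, but it at least keeps the AWS half of the chain out of client code.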
the best i can figure out is a pattern where a netlify function stores the secrets, but it feels super clunky.
very interesting to think about Aurora. aside from the challenges you mention, it seems great.
can you post your Terraform code or more details here for interested folks?
another thing - I am looking this weekend at using Vercel integrations with Google Cloud to try and fill this gap. it seems promising. see: Integrations – Dashboard – Vercel
of course i barely know GCP but i’d take it to avoid the troubles you mention.
would also love your thoughts on my other post i am now going to shamelessly plug:
If your DB is in a VPC and you put the Lambda in the same VPC, you can easily connect to the DB using IAM authentication for normal RDS, or just a simple DB URL for Aurora Serverless. IAM authentication would be ideal since you don't have to manage secrets. Yeah, I can post the Terraform code here.
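To illustrate the IAM-authentication route for a normal RDS instance (the DB resource ID, account ID, region, and `lambda_user` below are placeholders; the DB user also has to be created in the database and granted the `rds_iam` role for Postgres):

```hcl
resource "aws_db_instance" "db" {
  # ... engine, instance class, networking as usual ...
  iam_database_authentication_enabled = true
}

# Let the Lambda's role request short-lived auth tokens instead of a password.
resource "aws_iam_role_policy" "db_connect" {
  role = aws_iam_role.lambda.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = "rds-db:connect"
      Resource = "arn:aws:rds-db:us-east-1:123456789012:dbuser:db-RESOURCEID/lambda_user"
    }]
  })
}
```

The Lambda then generates a token at connect time (e.g. via the SDK's RDS auth-token helper) and uses it in place of a password.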
Here's my main.tf. Note: there's also a variables.tf and outputs.tf, and an optional backend.tf depending on whether you're using a remote backend. I used Terraform Cloud for my experiments.