As I push closer and closer to launching my Redwood project using Postgres, I am struggling with the workflow for the database across production, development, and testing environments (I've kinda bundled dev and test together right now).
The tutorial is great, but it stops short of getting things set up for local development after the production push. What is the best "Redwood" way to manage development and production workflows? I came across a great post from @clairefro on local Docker and Postgres, as well as the post by @danny on Supabase, but things are still a little foggy when it comes to structuring the entire workflow.
I read through the Prisma guide to using multiple .env files. Is this the way to set up Redwood? So far I have been commenting the different environments in and out of the .env file, but I know this is not sustainable and is prone to failure: accidentally wiping data, or breakage in production from schema changes.
Apologies if this has been answered before; if you have 'em handy, please share links that might help with the process. Thanks!
I felt the same way a few weeks ago, trying to use Postgres with the RedwoodJS documentation and finding several loose ends, mostly in the area of access rights to the database. Expecting that I may not be the only one, I wrote this tutorial as part of the RADC (Redwood Application Developers Cookbook) site.
As you understand, the RedwoodJS core team did an incredible job of creating the RW framework and related documentation. It should be equally obvious that the core team cannot scale up proportionally to the increase in RW's popularity. This is the reason I am trying to organize RW users (app developers who are not members of the core team), and the post I am responding to is a perfect example of RADC's utility: your need may not be the most important task on the core team's schedule, but it could very easily be the most important one on mine.
If you co-author the information you need with me, we may benefit more folks than just yourself.
Let me know, please.
Hey @adriatic, thanks for replying. The Redwood core team has been awesome at answering questions. That said, I know things are getting busy (1000 stars this month!), so I am hoping others like you in the community chime in.
Once my build is finished, if I have time, I plan to build another ‘generic’ version and document the process. In the meantime, happy to contribute to RADC in whatever way I can.
Hi @justinkuntz, could you please elaborate a bit more on what you’re looking for?
`.env` files are per environment and should not be committed to your repo. Beyond this, I would love to know what specifically to expand on.
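One common convention for keeping real env files out of the repo (not Redwood-specific; the filenames are just the usual pattern) is to commit a placeholder template and ignore everything else:

```
# .gitignore
.env
.env.*
# keep a committed template with placeholder values and no secrets
!.env.example
```

Each developer then copies `.env.example` to `.env` and fills in their own values.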
It was late yesterday when I responded to @justinkuntz's original post, so I forgot to mention that Doppler: Sync environment at scale could be the solution he seeks for his issues with .env files.
I know you use Supabase, so I am very thankful for your input. My experience with databases is limited, but I know from other projects in Rails that there is a better way. This might be long and a little convoluted, but I'll do my best…
Specifically, I'm looking for the best way to set up the `.env` file. I have things running several ways locally on different repos. For the one where I'm running Supabase, I've gone about it in two ways with some success.
- Run Supabase locally with a shadow database. This spins up Docker, and I can run Prisma migrations locally fine.
```
# This is the port I spun the Docker up on
# Local Supabase after `supabase init` and `supabase start`
SUPABASE_KEY=[given in the terminal on spinup]
SUPABASE_JWT_SECRET=[given in the terminal on spinup]
SHADOW_DATABASE_URL=[based on spinup]

# Another set of connections for Supabase in "the cloud", right now for production only
# DATABASE_URL=postgresql://postgres:[supaandpass].supabase.co:5432/postgres <-- I assume this is bad
# SUPABASE_KEY=[from Supabase settings]
# SUPABASE_JWT_SECRET=[from Supabase settings]
```
This approach worked except when running auth. I don't believe Supabase supports auth in the local dev environment, and to be honest I didn't even try. I would just comment the local vs. "cloud" values in and out when pushing changes to production.
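For reference, the local flow above roughly corresponds to these commands (Supabase CLI plus Redwood's Prisma wrapper; the actual keys, URLs, and ports come from your own spinup output):

```
# Start the local Supabase stack in Docker; keys and URLs are printed on success
supabase init
supabase start

# With DATABASE_URL and SHADOW_DATABASE_URL set from the printed values,
# run migrations against the local database
yarn rw prisma migrate dev
```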
- The second method was to set up Supabase Auth and run a local Postgres instance in Docker.
```
# Based on docker-compose.yml
# Supabase Cloud - comment out Database here
SUPABASE_KEY=[from Supabase settings]
SUPABASE_JWT_SECRET=[from Supabase settings]
```
As I move closer to production, I know things need to be set up more sustainably and securely. I haven't even gotten to the wonderful solution that you posted, nor appended the connections with
I also made it successfully through the tutorial a few times and deployed to Netlify. I used Railway as well, but struggled with the next steps of setting up clean production, development, and testing environments.
I know each setup is going to vary, but I'm looking for a general config that works for many, if that makes sense. Again, thank you!
You can have multiple `.env` files, one per environment. `.env` will load by default, but if you make a `.env.production`, the values in it will override `.env` when run with `NODE_ENV=production`. So I'll have production endpoints in there, and if I need to run a script against those servers I can do:

```
NODE_ENV=production yarn rw exec scriptName
```
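To illustrate the override behavior, here is a minimal sketch of dotenv-style layering. The parser is hand-rolled purely for illustration (Redwood actually loads these files via dotenv), and the URLs and keys are hypothetical placeholders:

```javascript
// Parse KEY=value lines from a .env-style string, skipping comments
function parseEnv(text) {
  const out = {};
  for (const line of text.split('\n')) {
    if (line.trim().startsWith('#')) continue;
    const m = line.match(/^\s*([A-Za-z_][A-Za-z0-9_]*)\s*=\s*(.*?)\s*$/);
    if (m) out[m[1]] = m[2];
  }
  return out;
}

// .env loads first; .env.<NODE_ENV> overrides any duplicate keys
const base = parseEnv(
  'DATABASE_URL=postgresql://localhost:5432/myapp_dev\nSUPABASE_KEY=local-key'
);
const prod = parseEnv(
  'DATABASE_URL=postgresql://db.example.supabase.co:5432/postgres'
);
const merged = { ...base, ...prod };
// merged.DATABASE_URL is the production URL; SUPABASE_KEY falls through from .env
```

Keys present only in `.env` (like `SUPABASE_KEY` here) survive the merge, which is why you only need to repeat the values that actually differ per environment.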
Great thread. Thanks @justinkuntz for setting the context! I have a lot of the same questions. Did you land on an optimal set-up in the end? If so, would love to see it.
Reading the dotenv docs, setting up multiple `.env` files isn't recommended, so I've been reluctant to do so.
Some very basic questions that I have:
- What’s the best way to manage multiple environments, e.g. dev, test, staging, production? Is having a separate branch for each with unique .env & config files sufficient or recommended?
- Does `NODE_ENV=production` need to be specified in the production env, or is `NODE_ENV` assumed to equal `production` if the variable is excluded (or does this depend on the hosting provider)?