Development and Production Workflow

As I push closer and closer to launching my Redwood project using Postgres, I am struggling with the workflow for the database across production, development, and testing environments (I've kind of bundled dev and test together for now).

The tutorial is great, but it stops short of getting things set up for local development after the production push. What is the best “Redwood” way to manage development and production workflows? I came across a great post from @clairefro for local Docker and Postgres, as well as the post by @danny for Supabase, but things are still a little foggy when it comes to structuring the entire workflow.

I read through the Prisma guide to Using multiple .env files. Is this the way to set up Redwood? So far I have been commenting the different environments in and out in the .env file, but I know this is not sustainable: it's prone to failure, to accidentally wiping data, and to breaking things with schema changes in production.

Apologies if this has been answered before; if you have 'em handy, please share links that might help with the process. Thanks!

Hello, @justinkuntz

I felt the same way a few weeks ago, trying to use Postgres with the RedwoodJS documentation and finding several loose ends, mostly around access rights to the database. Expecting that I may not be the only one, I wrote this tutorial as part of the RADC (Redwood Application Developers Cookbook) site.

As you know, the RedwoodJS core team did an incredible job creating the RW framework and its documentation. It should be equally obvious that the core team cannot scale up in proportion to the growth of RW's popularity. That is why I am trying to organize RW users (app developers who are not members of the core team), and the post I am responding to is a perfect example of RADC's utility: your need may not be the most important task on the core team's schedule, but it could very easily be the most important one on mine.

If you co-author the information you need with me, we may benefit more folks than just yourself :smiley:

Let me know, please.

Hey @adriatic, thanks for replying. The Redwood core team has been awesome at answering questions. That said, I know things are getting busy (1000 stars this month! :fire:), so I am hoping others in the community like you will chime in.

Once my build is finished, if I have time, I plan to build another ‘generic’ version and document the process. In the meantime, happy to contribute to RADC in whatever way I can.

Hi @justinkuntz, could you please elaborate a bit more on what you’re looking for?

.env files are per environment, and should not be committed to your repo. Beyond this, I'd love to know what specifically you'd like expanded on.

It was late yesterday when I responded to @justinkuntz's original post, so I forgot to mention that Doppler (Sync environment at scale) could be the solution he seeks for his issues with .env files.
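
If it helps, the usual pattern there is to let Doppler inject the secrets at run time instead of keeping a local .env at all. The commands below assume the Doppler CLI and are just an illustration, not something from the original post:

# one-time: link this directory to a Doppler project/config
doppler setup

# run Redwood with secrets injected as environment variables
doppler run -- yarn rw dev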

I know you use Supabase, so I am very thankful for your input. My experience with databases is limited, but I know from other projects in Rails that there is a better way. This might be long and a little convoluted, but I'll do my best…

Specifically, I'm looking for the best way to set up the .env file. I have things running several ways locally in different repos. For the one running Supabase… I've gone about it in two ways with some success.

  1. Run Supabase locally with a shadow database. This spins up Docker and I can run Prisma migrations locally just fine.
# .env

# This is the port I spun the Docker up on
DATABASE_URL=postgres://postgres:postgres@localhost:5432/postgres

# Local Supabase after supabase init and supabase start
SUPABASE_URL=http://localhost:8912
SUPABASE_KEY=[given in the terminal on spinup]
SUPABASE_JWT_SECRET=[given in the terminal on spinup]
SHADOW_DATABASE_URL=[Based on spinup]

# Another set of connections for Supabase in "the cloud", right now for production only
# DATABASE_URL=postgresql://postgres:[supaandpass].supabase.co:5432/postgres <-- I assume this is bad
# SUPABASE_URL=https://[generatedbysupa].supabase.co
# SUPABASE_KEY=[from Supabase settings]
# SUPABASE_JWT_SECRET=[from Supabase settings]

This approach worked except when running auth. I don't believe Supabase supports auth in the local dev environment, and to be honest I didn't even try. I would just comment the local vs. “cloud” values in and out when pushing changes to production.

  2. The second method was to set up Supabase Auth and run a local Postgres instance in Docker.
# .env

# Based on docker-compose.yml (a rough sketch of that file is below)
DATABASE_URL=postgresql://postgres:postgres@localhost:5432/postgres
TEST_DATABASE_URL=postgresql://postgres:postgres@localhost:5438/postgres

# Supabase Cloud - comment out DATABASE_URL here
# DATABASE_URL=postgresql://postgres:[supaandpass].supabase.co:5432/postgres
SUPABASE_URL=https://[generatedbysupa].supabase.co
SUPABASE_KEY=[from Supabase settings]
SUPABASE_JWT_SECRET=[from Supabase settings]
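
For context, here's a rough sketch of the docker-compose.yml those URLs point at. The ports match the connection strings above; the image tag and service names are my guesses, not copied from my actual setup:

version: "3"
services:
  db:
    image: postgres:14
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: postgres
    ports:
      # local dev database
      - "5432:5432"
  test-db:
    image: postgres:14
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: postgres
    ports:
      # separate test database on a different host port
      - "5438:5432"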

As I move closer to production, I know things need to be set up more sustainably and securely. I haven't even gotten to the wonderful solution that you posted, nor appended ?connection_limit=1 to the connection strings.
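
(For reference, appending that parameter is just a query-string suffix on the connection URL; the host and credentials below are the local Docker ones from above, purely as an illustration.)

# Prisma connection string limited to a single connection per serverless function
DATABASE_URL=postgresql://postgres:postgres@localhost:5432/postgres?connection_limit=1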

I also made it successfully through the tutorial a few times and deployed to Netlify. I used Railway, but also struggled with the next steps of creating clean production, development, and testing environments.

I know each setup is going to vary, but I'm looking for a general config that works for most people, if that makes sense. Again, thank you!

You can have multiple .env files, one per environment. .env will load by default, but if you create a .env.production, its values will override those in .env when run with NODE_ENV=production.

So I’ll have production endpoints in there and if I need to run a script against those servers I can do:

NODE_ENV=production yarn rw exec scriptName

Great thread. Thanks @justinkuntz for setting the context! I have a lot of the same questions. Did you land on an optimal set-up in the end? If so, would love to see it.

Reading the dotenv docs, setting up multiple .env files isn’t recommended so I’ve been reluctant to do so.

Some very basic questions that I have:

  • What’s the best way to manage multiple environments, e.g. dev, test, staging, production? Is having a separate branch for each with unique .env & config files sufficient or recommended?
  • Does NODE_ENV=production need to be specified in the production environment, or is NODE_ENV assumed to be production if the variable is omitted (or does this depend on the hosting provider)?

Just to expand on @rob's previous answer.

You can have an .env.local file and if you run NODE_ENV=local yarn rw prisma migrate dev, it will run smoothly and use .env.local as a source for all environment variables.

I can confirm that this is the case.

Cheers.

Does it still work like this? I’m not sure if I’m trying something different, but I’ve executed

NODE_ENV=production
yarn rw build
yarn rw serve

Even though the values in .env.production should override the values in .env, when my application opens it's still using the variables from the .env file. Shouldn't it load the values from my .env.production file in this case?

If you want specific environment variables set when running a command you need to prefix your command with them each time:

NODE_ENV=production yarn rw build
NODE_ENV=production yarn rw serve

Just running NODE_ENV=production by itself doesn’t really do anything! If you want to set that variable to last the remainder of your session in that terminal window, you could prefix it with export:

export NODE_ENV=production

But as soon as you close your terminal window it’ll be gone (and no other windows will have it set unless you run the same export in those windows as well).

Now that the var is set and available in the environment you could run build and serve without specifying it:

yarn rw build
yarn rw serve

You can also run everything at once with a single command, using && between them:

NODE_ENV=production yarn rw build && yarn rw serve

Hi @rob

I think I'm still missing something. I've tried it with a project created from scratch on RedwoodJS version 6.6.2, but I failed to make the .env.production file work. Let's say I create a project with

yarn create redwood-app my_project

And in this project I write the following in my .env.defaults file

REDWOOD_ENV_myvar="development value"

As well as a new .env.production file:

REDWOOD_ENV_myvar="production value"

Finally, if I create a home page for it

yarn rw generate page home /

And on this page, I add a console.log(process.env.REDWOOD_ENV_myvar) to check the value of my environment variable… I've tried running it in multiple ways, like:

NODE_ENV=production yarn rw dev

and

export NODE_ENV=production
yarn rw dev

and

NODE_ENV=production yarn rw build
NODE_ENV=production yarn rw serve

and

export NODE_ENV=production
yarn rw build
yarn rw serve

All of them make my frontend print the value from .env.defaults and ignore my .env.production. Am I missing anything?


Particularly while running it with yarn rw dev, I’ve noticed that when I close the process that’s running in the terminal it shows some commands:

NODE_ENV=production yarn rw dev
web |   ➜  Local:   http://localhost:8910/
web |   ➜  Network: http://10.20.0.184:8910/
web |   ➜  Network: http://172.22.0.1:8910/
web |   ➜  Network: http://172.19.0.1:8910/
web |   ➜  Network: http://172.21.0.1:8910/
web |   ➜  Network: http://192.168.176.1:8910/
web |   ➜  Network: http://10.15.2.65:8910/
gen | Generating full TypeScript definitions and GraphQL schemas
gen | Done.
api | Building... Took 223 ms
api | Debugger listening on ws://127.0.0.1:18911/0e3c6964-4edb-4278-aa9d-e8874aa6fdb3
api | For help, see: https://nodejs.org/en/docs/inspector
api | Starting API Server...
api | Loading server config from /home/Dados/Projetos/testes/my_project/api/server.config.js
api |
api | Importing Server Functions...
api | /graphql 237 ms
api | ...Done importing in 237 ms
api | Took 262 ms
api | API listening on http://localhost:8911/
api | GraphQL endpoint at /graphql
api | 18:04:21 🌲 Server listening at http://[::]:8911
^Cgen | yarn rw-gen-watch exited with code SIGINT
web | yarn cross-env NODE_ENV=development rw-vite-dev  exited with code SIGINT
api | yarn cross-env NODE_ENV=development NODE_OPTIONS="--enable-source-maps" yarn nodemon --quiet --watch "/home/Dados/Projetos/testes/my_project/redwood.toml" --exec "yarn rw-api-server-watch --port 8911 --debug-port 18911 | rw-log-formatter" exited with code SIGINT

The last two lines have NODE_ENV=development; it's as if it's forcing the value of NODE_ENV to be development.

WELL… after some research with @dom it turns out Redwood does NOT support multiple .env files!

I could have sworn we even had documentation that spelled out this behavior, but I can’t find it now! (There is a doc about a .env.production when using the Serverless Framework deploy, but that’s not the framework in general.)

That’s the behavior in Rails, which is what we modeled ours on, and I’m pretty sure we talked about enabling that, but I guess we never got past just doing .env and .env.defaults

I'm not sure you can even fix it in your web-side code; I think it would need to go into the framework code. There's a solution that looks like this:

const result = require("dotenv").config({ path: `.env.${process.env.NODE_ENV}` });

But we use dotenv-defaults and I’m not sure if it supports the same syntax.
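
If anyone wants to experiment, dotenv-defaults appears to accept path and defaults options, so a framework-side change might look roughly like this (untested, and the option names are an assumption worth checking against its README):

// hypothetical: load an environment-specific file, falling back to .env.defaults
const { config } = require('dotenv-defaults')

config({
  path: `.env.${process.env.NODE_ENV || 'development'}`,
  defaults: '.env.defaults',
})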

I don't think many people have run into this problem because they either deploy somewhere like Netlify or Vercel, where you define your ENV vars in their UI and they make them available at build time, or they deploy to a server where you can just create a .env file, which will always be used regardless of NODE_ENV (which should be set to production in those environments).

I see. I ran some tests here and realized that when I execute yarn rw dev, the value of NODE_ENV is always overridden to development on the api side, while with yarn rw serve it is always overridden to production, no matter what value I try to export before starting my application. But I'm not sure where I'd put the code you've provided…

I did think of a different solution that works as I expected, though: all the other variables are not overridden when the application starts, so a workaround that works for any file would be:

set -a; source .env.staging; set +a; yarn rw dev
  • set -a: This command automatically exports any variables that are defined after it is run.
  • set +a: This command stops the automatic export of any variables defined after it is run.

This way it works very similarly to the original intention of using export for each variable, and it solves it for both the web and api sides…
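
For example, reusing the variable from earlier in the thread (the file name and value here are just for illustration):

# contents of .env.staging (illustrative)
REDWOOD_ENV_myvar="staging value"

# export everything in .env.staging for this one command only
set -a; source .env.staging; set +a; yarn rw dev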