Simplifying Docker deployment for Redwood app without Docker Compose

I’m trying to deploy my Redwood app as a Docker container, but I’m encountering some challenges with the default setup. Here’s my situation:

  1. I used Redwood’s Docker feature to generate a Dockerfile, but it’s more complex than I need.
  2. The generated setup assumes the use of Docker Compose and runs Postgres in a separate container.
  3. I’m using a managed database service for my DB needs (with connection pooling, automatic backups, scaling, etc.), so I don’t need a Postgres container.
  4. I want a simpler configuration: a single container that runs both the API and Web sides of my Redwood app.

Has anyone successfully created a simplified Docker setup for Redwood that doesn’t rely on Docker Compose or include a Postgres container? Any tips on modifying the existing Dockerfile or examples would be greatly appreciated :pray:

This may not be the answer you want, and I don’t use Docker myself, but I gave ChatGPT the Dockerfile along with your list of needs, and it came up with two options … rather lengthy to post here.

One described how to avoid Compose, and the other just set the managed database URLs.

I can’t evaluate whether either works, but if you do have ChatGPT access it’s worth asking; maybe it will point you in the right direction.

Otherwise perhaps our Docker expert @Josh-Walker-GM could suggest a way to use a managed db instead.

Thanks. Yes, I tried ChatGPT. The problem is that, unfortunately, I don’t understand Dockerfiles well enough to validate a proposed solution and decide whether it’s something I want to use for a production deployment. And it didn’t work anyway :man_shrugging:.

Frankly, I think the Redwood docs lack a simpler Docker setup that just runs both the API and Web sides, without the database, in a single container. Obviously I don’t have any numbers, but I would anticipate this to be the most common scenario people want to start with: lower requirements for the hosting environment, and no risk of messing up the DB configuration and losing data. It was super easy to copy and paste a Dockerfile from the Next.js examples and run it everywhere I tried: locally, DigitalOcean, Railway, GCP.


@iam It was my triage week, and I’m raising this with the other Core team members who have more Docker experience.

Could you share with us how you have set up Postgres in Docker in the past?

I haven’t. I’ve only used cloud-managed databases: GCP SQL, DigitalOcean PostgreSQL, Heroku PostgreSQL.

Ok, so your Docker setup never defines any env vars or connection strings for your app to use when connecting to a cloud DB?

To clarify, I don’t have a working Redwood Dockerfile. But my Next.js configs define ARGs for DB connection strings that point to cloud DBs.
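The pattern in my Next.js Dockerfiles is roughly this (a sketch, not my actual file; `DATABASE_URL` is just an illustrative name for the managed DB’s connection string):

```dockerfile
# Accept the managed DB connection string at build time (illustrative name)
ARG DATABASE_URL
# Make it available to the app at runtime as well
ENV DATABASE_URL=$DATABASE_URL
```

The image is then built with something like `docker build --build-arg DATABASE_URL="postgres://..." .` — with the caveat that values passed via ARG get baked into the image layers, so for real secrets, passing them at `docker run -e` time is safer.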

Here’s a hasty attempt to create a unified image. What I did here was merge web_serve and api_serve into a single target and comment out console (so that the last target is the one we actually want to serve; we don’t need it without docker-compose anyway).

Full file:
  # base
  # ----
  FROM node:20-bookworm-slim as base

  RUN corepack enable

  # We tried to make the Dockerfile as lean as possible. In some cases, that means we excluded a dependency your project needs.
  # By far the most common is Python. If you're running into build errors because `python3` isn't available,
  # add `python3 make gcc \` before the `openssl \` line below and in other stages as necessary:
  RUN apt-get update && apt-get install -y \
    openssl \
    && rm -rf /var/lib/apt/lists/*

  USER node
  WORKDIR /home/node/app

  COPY --chown=node:node .yarnrc.yml .
  COPY --chown=node:node package.json .
  COPY --chown=node:node api/package.json api/
  COPY --chown=node:node web/package.json web/
  COPY --chown=node:node yarn.lock .

  RUN mkdir -p /home/node/.yarn/berry/index
  RUN mkdir -p /home/node/.cache

  RUN --mount=type=cache,target=/home/node/.yarn/berry/cache,uid=1000 \
    --mount=type=cache,target=/home/node/.cache,uid=1000 \
    CI=1 yarn install

  COPY --chown=node:node redwood.toml .
  COPY --chown=node:node graphql.config.js .
  COPY --chown=node:node .env.defaults .env.defaults

  # api build
  # ---------
  FROM base as api_build

  # If your api side build relies on build-time environment variables,
  # specify them here as ARGs. (But don't put secrets in your Dockerfile!)
  #
  # ARG MY_BUILD_TIME_ENV_VAR

  COPY --chown=node:node api api
  RUN yarn rw build api

  # web prerender build
  # -------------------
  FROM api_build as web_build_with_prerender

  ARG CLERK_PUBLISHABLE_KEY

  COPY --chown=node:node web web
  RUN yarn rw build web

  # web build
  # ---------
  FROM base as web_build

  ARG CLERK_PUBLISHABLE_KEY

  COPY --chown=node:node web web
  RUN yarn rw build web --no-prerender

  # api serve
  # ---------
  FROM node:20-bookworm-slim as serve

  RUN corepack enable

  RUN apt-get update && apt-get install -y \
    openssl \
    && rm -rf /var/lib/apt/lists/*

  USER node
  WORKDIR /home/node/app

  COPY --chown=node:node .yarnrc.yml .
  COPY --chown=node:node package.json .
  COPY --chown=node:node api/package.json api/
  COPY --chown=node:node web/package.json web/
  COPY --chown=node:node yarn.lock .

  RUN mkdir -p /home/node/.yarn/berry/index
  RUN mkdir -p /home/node/.cache

  RUN --mount=type=cache,target=/home/node/.yarn/berry/cache,uid=1000 \
    --mount=type=cache,target=/home/node/.cache,uid=1000 \
    CI=1 yarn workspaces focus --all --production

  COPY --chown=node:node redwood.toml .
  COPY --chown=node:node graphql.config.js .
  COPY --chown=node:node .env.defaults .env.defaults

  COPY --chown=node:node --from=api_build /home/node/app/api/dist /home/node/app/api/dist
  COPY --chown=node:node --from=api_build /home/node/app/api/db /home/node/app/api/db
  COPY --chown=node:node --from=api_build /home/node/app/node_modules/.prisma /home/node/app/node_modules/.prisma

  COPY --chown=node:node --from=web_build /home/node/app/web/dist /home/node/app/web/dist

  ENV NODE_ENV=production \
  API_PROXY_TARGET=http://localhost:$API_PORT

  # default api serve command
  # ---------
  # If you are using a custom server file, you must use the following
  # command to launch your server instead of the default api-server below.
  # This is important if you intend to configure GraphQL to use Realtime.
  #
  # CMD [ "./api/dist/server.js" ]
  CMD [ "sh", "-c", "node_modules/.bin/rw-server api & node_modules/.bin/rw-web-server --api-proxy-target $API_PROXY_TARGET" ]

  # # console
  # # -------
  # FROM base as console

  # # To add more packages:
  # #
  # # ```
  # # USER root
  # #
  # # RUN apt-get update && apt-get install -y \
  # #     curl
  # #
  # # USER node
  # # ```

  # COPY --chown=node:node api api
  # COPY --chown=node:node web web
  # COPY --chown=node:node scripts scripts

Diff:

diff --git a/Dockerfile b/Dockerfile
index c051185..3c1ccec 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -63,7 +63,7 @@ RUN yarn rw build web --no-prerender
 
   # api serve
   # ---------
-FROM node:20-bookworm-slim as api_serve
+  FROM node:20-bookworm-slim as serve
 
   RUN corepack enable
 
@@ -77,6 +77,7 @@ WORKDIR /home/node/app
   COPY --chown=node:node .yarnrc.yml .
   COPY --chown=node:node package.json .
   COPY --chown=node:node api/package.json api/
+  COPY --chown=node:node web/package.json web/
   COPY --chown=node:node yarn.lock .
 
   RUN mkdir -p /home/node/.yarn/berry/index
@@ -84,7 +85,7 @@ RUN mkdir -p /home/node/.cache
 
   RUN --mount=type=cache,target=/home/node/.yarn/berry/cache,uid=1000 \
     --mount=type=cache,target=/home/node/.cache,uid=1000 \
-  CI=1 yarn workspaces focus api --production
+    CI=1 yarn workspaces focus --all --production
 
   COPY --chown=node:node redwood.toml .
   COPY --chown=node:node graphql.config.js .
@@ -94,7 +95,10 @@ COPY --chown=node:node --from=api_build /home/node/app/api/dist /home/node/app/a
   COPY --chown=node:node --from=api_build /home/node/app/api/db /home/node/app/api/db
   COPY --chown=node:node --from=api_build /home/node/app/node_modules/.prisma /home/node/app/node_modules/.prisma
 
-ENV NODE_ENV=production
+  COPY --chown=node:node --from=web_build /home/node/app/web/dist /home/node/app/web/dist
+
+  ENV NODE_ENV=production \
+  API_PROXY_TARGET=http://localhost:$API_PORT
 
   # default api serve command
   # ---------
@@ -103,56 +107,23 @@ ENV NODE_ENV=production
   # This is important if you intend to configure GraphQL to use Realtime.
   #
   # CMD [ "./api/dist/server.js" ]
-CMD [ "node_modules/.bin/rw-server", "api" ]
-
-# web serve
-# ---------
-FROM node:20-bookworm-slim as web_serve
-
-RUN corepack enable
-
-USER node
-WORKDIR /home/node/app
-
-COPY --chown=node:node .yarnrc.yml .
-COPY --chown=node:node package.json .
-COPY --chown=node:node web/package.json web/
-COPY --chown=node:node yarn.lock .
-
-RUN mkdir -p /home/node/.yarn/berry/index
-RUN mkdir -p /home/node/.cache
-
-RUN --mount=type=cache,target=/home/node/.yarn/berry/cache,uid=1000 \
-  --mount=type=cache,target=/home/node/.cache,uid=1000 \
-  CI=1 yarn workspaces focus web --production
-
-COPY --chown=node:node redwood.toml .
-COPY --chown=node:node graphql.config.js .
-COPY --chown=node:node .env.defaults .env.defaults
-
-COPY --chown=node:node --from=web_build /home/node/app/web/dist /home/node/app/web/dist
-
-ENV NODE_ENV=production \
-  API_PROXY_TARGET=http://api:8911
-
-# We use the shell form here for variable expansion.
-CMD "node_modules/.bin/rw-web-server" "--api-proxy-target" "$API_PROXY_TARGET"
-
-# console
-# -------
-FROM base as console
-
-# To add more packages:
-#
-# ```
-# USER root
-#
-# RUN apt-get update && apt-get install -y \
-#     curl
-#
-# USER node
-# ```
-
-COPY --chown=node:node api api
-COPY --chown=node:node web web
-COPY --chown=node:node scripts scripts
+  CMD [ "sh", "-c", "node_modules/.bin/rw-server api & node_modules/.bin/rw-web-server --api-proxy-target $API_PROXY_TARGET" ]
+
+  # # console
+  # # -------
+  # FROM base as console
+
+  # # To add more packages:
+  # #
+  # # ```
+  # # USER root
+  # #
+  # # RUN apt-get update && apt-get install -y \
+  # #     curl
+  # #
+  # # USER node
+  # # ```
+
+  # COPY --chown=node:node api api
+  # COPY --chown=node:node web web
+  # COPY --chown=node:node scripts scripts

Built with

docker build -t gobbled-up-image -f Dockerfile .

Ran with

docker run -d \
  -p 8910:8910 \
  -p 8911:8911 \
  -e CLERK_PUBLISHABLE_KEY=your-clerk-key \
  -e MY_SECRET_KEY=your-secret-value \
  -e STRIPE_SECRET_KEY=sk_test_key \
  gobbled-up-image

I’m hoping this example gives enough of an idea of how to deal with the different envs (build-time for web vs runtime for api).

Thank you! I’m traveling until Saturday this week, so I cannot try it out right now. But I’ll let you know whether it worked for me once I have a chance.

Update

@callingmedic911 This works perfectly, thank you once again! One minor change I made: I replaced $API_PORT with the hardcoded 8911 value, as in the original file.
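For anyone copying this, that change is just the ENV line in the serve stage:

```dockerfile
# $API_PORT is never defined in the image, so hardcode the default api port
ENV NODE_ENV=production \
  API_PROXY_TARGET=http://localhost:8911
```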

If someone with deeper Docker expertise could review this Dockerfile, it would be great to have it at least in the How To section of the docs.

Another question I have is whether we need the web_build_with_prerender stage at all. As far as I can tell, we don’t use artifacts from this stage in any other stage.

My app didn’t have any prerendered routes, so it doesn’t matter here, but I guess a better approach would be to always copy from web_build_with_prerender instead of web_build (the one without prerendering), so that if I ever add prerendered routes, those built files will be copied into the final image.
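Concretely, that would mean changing the web COPY in the serve stage to pull from the prerender stage instead (untested sketch):

```dockerfile
# Copy the prerendered web build instead of the plain one, so any
# prerendered routes also end up in the final image
COPY --chown=node:node --from=web_build_with_prerender /home/node/app/web/dist /home/node/app/web/dist
```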


More questions come up as I’m trying to complete the deployment configuration. What would be the appropriate place to run database migrations? I would assume it should be the CMD in the last stage, but that stage doesn’t have the prisma binary or other required dependencies like the @prisma/engines package. Piling more dependencies into the last stage doesn’t feel like the right approach.

I would try to keep deployment steps decoupled from the Docker image. You don’t want to run migrations every time you start the image (restart, crash, or multiple instances of the same image).

Where to run those migrations depends partly on the deployment provider/environment/CI process. The idea is to run the migration once per deploy, regardless of how many times you run the image. You can do it via a CI action or a configuration from your host (for example: Fly’s release_command).
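For example, with Fly that could look like this in fly.toml (a sketch; the exact command depends on what your final image actually contains, since `yarn rw` assumes the Redwood CLI is installed there):

```toml
[deploy]
  # Runs once per deploy, before the new version starts serving traffic
  release_command = "yarn rw prisma migrate deploy"
```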

Thank you. I think I generally agree, but on the other hand, using the last stage’s CMD shouldn’t hurt the deployment process: if there is nothing to migrate it will be near-instantaneous, and Prisma is fine with attempting to migrate an already-migrated database. Additionally, it would be much quicker than a separate hosting-provider command, which would need to install all the api dependencies every time.
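If migrations were to live in the CMD, it could look roughly like this (a sketch only; as noted earlier in the thread, the prisma binary is not currently installed in the serve stage, so this assumes it gets added there first):

```dockerfile
# Sketch: run any pending migrations, then start both servers.
# `prisma migrate deploy` is a no-op when the database is already up to date.
CMD [ "sh", "-c", "node_modules/.bin/prisma migrate deploy --schema api/db/schema.prisma && (node_modules/.bin/rw-server api & node_modules/.bin/rw-web-server --api-proxy-target $API_PROXY_TARGET)" ]
```

Note the parentheses around the two servers: without them, `sh` would background the migrate-plus-api-server pair and start the web server before migrations finished.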

It’s surprisingly difficult to complete a full setup for a Docker-based deployment.

Railway doesn’t support pre- or post-deployment commands, so I attempted to migrate my app to fly.io. Everything works fine except the DB migration. To run a migration you need to:

  1. Install the API side’s dev dependencies, because of this command in package.json: yarn rw exec seed.
  2. Which also requires unsetting NODE_ENV, since it’s set to production in the last stage of the Dockerfile.
  3. Copy the scripts folder to the last stage, since that’s where seed.ts is located.
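As Dockerfile changes to the last stage, those three steps would look roughly like this (sketch, untested):

```dockerfile
# 1. Install dev dependencies too, so `yarn rw exec` works
#    (i.e. drop --production from the `yarn workspaces focus` command)
# 2. Don't bake NODE_ENV=production into the image; the seed run
#    needs it unset, so pass NODE_ENV at runtime instead
# 3. Copy the scripts folder, which is where seed.ts lives
COPY --chown=node:node scripts scripts
```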

It all looks so inefficient and cumbersome, and it defeats the optimizations made in the Dockerfile. If someone has come up with a better way to automate migrations and DB seeding for the dev environment, please let me know.