Deploying on Coolify

Coolify has gained a lot of traction recently. It’s like having your own Netlify or Vercel: it handles a lot of the DevOps (though not all of it) for you.

I have set up my own server and begun deploying with their UI, using a private repo with GitHub Apps. During deployment, I had to choose between Nixpacks, Dockerfile, or Docker Compose. I created my own nixpack.toml, but when deploying with it I encountered ‘Bad Gateway’ and function issues on my deployed site.

So I switched to the Dockerfile, since there is at least some documentation for this experimental feature. Of note: this project uses the server file. So here’s where I’m at, and I think I’m close to deploying successfully with Coolify.
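For anyone following along, “the server file” refers to Redwood’s custom server entry point (api/src/server.ts, which ends up as api/dist/server.js after the build). Assuming it was added the usual way, that would have been via something like:

yarn rw setup server-file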

  1. Run yarn rw experimental setup-docker
  2. Don’t use the compose files for deployment; instead choose the Dockerfile build option in Coolify
  3. Edit the Dockerfile (path changes, adding .yarn/releases, and the server.js command)
# base
# ----
FROM node:20-bookworm-slim as base

RUN corepack enable

# We tried to make the Dockerfile as lean as possible. In some cases, that means we excluded a dependency your project needs.
# By far the most common is Python. If you're running into build errors because `python3` isn't available,
# uncomment the line below here and in other stages as necessary:
RUN apt-get update && apt-get install -y \
    openssl \
    # python3 make gcc \
    && rm -rf /var/lib/apt/lists/*

USER node
WORKDIR /app

COPY --chown=node:node .yarn/releases .yarn/releases
COPY --chown=node:node .yarnrc.yml .
COPY --chown=node:node package.json .
COPY --chown=node:node api/package.json api/
COPY --chown=node:node web/package.json web/
COPY --chown=node:node yarn.lock .

RUN mkdir -p /app/.yarn/berry/index
RUN mkdir -p /app/.cache

RUN --mount=type=cache,target=/app/.yarn/berry/cache,uid=1000 \
    --mount=type=cache,target=/app/.cache,uid=1000 \
    CI=1 yarn install

COPY --chown=node:node redwood.toml .
COPY --chown=node:node graphql.config.js .

# api build
# ---------
FROM base as api_build

# If your api side build relies on build-time environment variables,
# specify them here as ARGs. (But don't put secrets in your Dockerfile!)
#
# ARG MY_BUILD_TIME_ENV_VAR

COPY --chown=node:node api api
RUN yarn rw build api

# web prerender build
# -------------------
FROM api_build as web_build_with_prerender

COPY --chown=node:node web web
RUN yarn rw build web

# web build
# ---------
FROM base as web_build

COPY --chown=node:node web web
RUN yarn rw build web --no-prerender

# api serve
# ---------
FROM node:20-bookworm-slim as api_serve

RUN corepack enable

RUN apt-get update && apt-get install -y \
    openssl \
    # python3 make gcc \
    && rm -rf /var/lib/apt/lists/*

USER node
WORKDIR /app

COPY --chown=node:node .yarnrc.yml .
COPY --chown=node:node package.json .
COPY --chown=node:node api/package.json api/
COPY --chown=node:node yarn.lock .

RUN mkdir -p /app/.yarn/berry/index
RUN mkdir -p /app/.cache

RUN --mount=type=cache,target=/app/.yarn/berry/cache,uid=1000 \
    --mount=type=cache,target=/app/.cache,uid=1000 \
    CI=1 yarn workspaces focus api --production

COPY --chown=node:node redwood.toml .
COPY --chown=node:node graphql.config.js .

COPY --chown=node:node --from=api_build /app/api/dist /app/api/dist
COPY --chown=node:node --from=api_build /app/api/db /app/api/db
COPY --chown=node:node --from=api_build /app/node_modules/.prisma /app/node_modules/.prisma

ENV NODE_ENV=production

# default api serve command
# ---------
# If you are using a custom server file, you must use the following
# command to launch your server instead of the default api-server below.
# This is important if you intend to configure GraphQL to use Realtime.
#
CMD [ "./api/dist/server.js" ]
# CMD [ "node_modules/.bin/rw-server", "api" ]


# web serve
# ---------
FROM node:20-bookworm-slim as web_serve

RUN corepack enable

USER node
WORKDIR /app

COPY --chown=node:node .yarnrc.yml .
COPY --chown=node:node package.json .
COPY --chown=node:node web/package.json web/
COPY --chown=node:node yarn.lock .

RUN mkdir -p /app/.yarn/berry/index
RUN mkdir -p /app/.cache

RUN --mount=type=cache,target=/app/.yarn/berry/cache,uid=1000 \
    --mount=type=cache,target=/app/.cache,uid=1000 \
    CI=1 yarn workspaces focus web --production

COPY --chown=node:node redwood.toml .
COPY --chown=node:node graphql.config.js .

COPY --chown=node:node --from=web_build /app/web/dist /app/web/dist

ENV NODE_ENV=production \
    API_PROXY_TARGET=http://api:8911

# We use the shell form here for variable expansion.
CMD "node_modules/.bin/rw-web-server" "--api-proxy-target" "$API_PROXY_TARGET"

# console
# -------
FROM base as console

# To add more packages:
#
# ```
# USER root
#
# RUN apt-get update && apt-get install -y \
#     curl
#
# USER node
# ```

COPY --chown=node:node api api
COPY --chown=node:node web web
COPY --chown=node:node scripts scripts

  4. Disable the healthcheck in Coolify, since we’re using Docker (apparently you’d need to use curl/wget to make it work, but I’m not bothering with that just yet; it’s not necessary for the deployment to work)
  5. Add 8911 and 8910 under Ports exposed for the app in Coolify (see the sketch after this list)
  6. Hit deploy in Coolify
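As a side note on the ports step: in plain Docker the equivalent hint would be EXPOSE directives in the serve stages. This is only a sketch of what that could look like; the generated Redwood Dockerfile doesn’t include these lines, and on Coolify the UI setting above is what actually matters:

# in the api_serve stage
EXPOSE 8911

# in the web_serve stage
EXPOSE 8910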

Here is where I encountered some issues with the standard Dockerfile. I had to change the path to /app, because when Coolify deploys a project it does not create it under /home. That seemed to fix it and it didn’t throw errors. Then I had issues with yarn install, so I had to copy the .yarn/releases folder. Now the deploy logs show everything building, but it hangs on this step (Coolify deploy log output):

[COMMAND] docker exec r8084wk bash -c 'bash /artifacts/build.sh'
[OUTPUT]
#18 59.69 ➤ YN0007: │ @prisma/engines@npm:5.2.0 must be built because it never has been before or the last one failed

[COMMAND] docker exec r8084wk bash -c 'bash /artifacts/build.sh'
[OUTPUT]
#18 59.69 ➤ YN0007: │ core-js@npm:3.37.1 must be built because it never has been before or the last one failed
#18 59.69 ➤ YN0007: │ esbuild@npm:0.21.3 must be built because it never has been before or the last one failed
#18 59.69 ➤ YN0007: │ core-js-pure@npm:3.37.1 must be built because it never has been before or the last one failed
#18 59.70 ➤ YN0007: │ better-sqlite3@npm:8.6.0 must be built because it never has been before or the last one failed
#18 59.70 ➤ YN0007: │ @prisma/engines@npm:4.16.2 must be built because it never has been before or the last one failed
#18 59.70 ➤ YN0007: │ msw@npm:1.3.3 [1ea41] must be built because it never has been before or the last one failed
#18 59.70 ➤ YN0007: │ @prisma/client@npm:5.14.0 [dc82c] must be built because it never has been before or the last one failed
#18 59.70 ➤ YN0007: │ esbuild@npm:0.18.20 must be built because it never has been before or the last one failed
#18 59.70 ➤ YN0007: │ @swc/core@npm:1.6.3 [6b6bb] must be built because it never has been before or the last one failed
#18 59.70 ➤ YN0007: │ @prisma/engines@npm:5.14.0 must be built because it never has been before or the last one failed

[COMMAND] docker exec r8084wk bash -c 'bash /artifacts/build.sh'
[OUTPUT]
#18 67.53 ➤ YN0000: │ @prisma/client@npm:5.14.0 [dc82c] STDERR prisma:warn We could not find your Prisma schema in the default locations (see: https://pris.ly/d/prisma-schema-location.
#18 67.53 ➤ YN0000: │ @prisma/client@npm:5.14.0 [dc82c] STDERR If you have a Prisma schema file in a custom path, you will need to run
#18 67.53 ➤ YN0000: │ @prisma/client@npm:5.14.0 [dc82c] STDERR `prisma generate --schema=./path/to/your/schema.prisma` to generate Prisma Client.

[COMMAND] docker exec r8084wk bash -c 'bash /artifacts/build.sh'
[OUTPUT]
#18 67.53 ➤ YN0000: │ @prisma/client@npm:5.14.0 [dc82c] STDERR If you do not have a Prisma schema file yet, you can ignore this message.
#18 67.53 ➤ YN0000: │ @prisma/client@npm:5.14.0 [dc82c] STDOUT

[COMMAND] docker exec r8084wk bash -c 'bash /artifacts/build.sh'
[OUTPUT]
#18 69.49 ➤ YN0007: │ prisma@npm:5.14.0 must be built because it never has been before or the last one failed

[COMMAND] docker exec r8084wk bash -c 'bash /artifacts/build.sh'
[OUTPUT]
#18 70.03 ➤ YN0000: └ Completed in 37s 493ms

[COMMAND] docker exec r8084wk bash -c 'bash /artifacts/build.sh'
[OUTPUT]
#18 71.04 ➤ YN0000: · Done with warnings in 1m 10s

I looked at the Dockerfile and it’s clearly copying the db folder:

COPY --chown=node:node --from=api_build /app/api/db /app/api/db

I have not deleted my migrations folder prior to deployment or run migrate dev as a post-deployment task. I think the first question is why the schema can’t be found; this ultimately breaks my deployment. Also, I’m running Coolify as root, but I see the Dockerfile uses the node user. I don’t see an error regarding this, but would I need to create this user first?

Any ideas would be highly appreciated.

The node user is provided by the node:20-bookworm-slim image and is used as a non-root user (see docker-node/docs/BestPractices.md in nodejs/docker-node on GitHub).

The .yarn/releases copy shouldn’t be necessary. If you created the project prior to v6.6.0 you may need to follow these instructions.

As for the build error… could you maybe share the build.sh? Which target from the multi-stage Dockerfile are you building?

Oh, I see. I must’ve left the .yarn folder intact, which, as the release upgrade notes mention, isn’t necessary. I also still had the yarnPath in the .yarnrc.yml file. Alright, second try, here we go. I removed the line about releases in the Dockerfile I shared above:

Remove yarnPath in .yarnrc.yml and remove the .yarn/releases folder.
Remove the line COPY --chown=node:node .yarn/releases .yarn/releases from the Dockerfile.
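For reference, after this cleanup a recent Redwood project’s .yarnrc.yml is quite small, since Corepack supplies the Yarn binary via the packageManager field in package.json. A sketch of what it typically looks like (your exact settings may differ):

compressionLevel: 0
enableGlobalCache: true
nodeLinker: node-modules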

Here’s my package.json

{
  "private": true,
  "workspaces": {
    "packages": [
      "api",
      "web"
    ]
  },
  "devDependencies": {
    "@redwoodjs/auth-dbauth-setup": "7.7.1",
    "@redwoodjs/cli-storybook": "7.7.1",
    "@redwoodjs/core": "7.7.1",
    "@redwoodjs/project-config": "7.7.1",
    "@redwoodjs/studio": "11"
  },
  "eslintConfig": {
    "extends": "@redwoodjs/eslint-config",
    "root": true
  },
  "engines": {
    "node": "=20.x",
    "yarn": ">=1.15"
  },
  "prisma": {
    "seed": "yarn rw exec seed"
  },
  "packageManager": "yarn@4.3.0"
}

I have the GitHub App set up to auto-deploy on Coolify; here’s what my project settings look like in the UI:

For the healthcheck issue I added this line to the Dockerfile:

HEALTHCHECK --interval=5s --timeout=3s --start-period=30s --retries=5 CMD wget -qO- http://localhost:8911/graphql/health || exit 1
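A note of caution on that line: node:20-bookworm-slim doesn’t ship wget (or curl) by default, so unless one of them is installed in the serve stage this HEALTHCHECK will always report unhealthy, which matches the curl/wget warning Coolify prints later. A sketch of an alternative that relies only on Node 20’s built-in fetch instead of an extra package:

HEALTHCHECK --interval=5s --timeout=3s --start-period=30s --retries=5 \
  CMD node -e "fetch('http://localhost:8911/graphql/health').then(r => process.exit(r.ok ? 0 : 1)).catch(() => process.exit(1))"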

I also set up a NODE_ENV=production environment variable under the project that’s marked as a build variable. Other than that, there’s no other configuration I can make. So I commit and it starts auto-deploying. I can then go to the Deployments log and watch the build process.

Here’s what it looks like right now (abbreviated version):

Preparing container with helper image: ghcr.io/coollabsio/coolify-helper:latest.
[COMMAND] docker rm -f i4wks40
[OUTPUT]
Error response from daemon: No such container: i4wks40
[COMMAND] docker run -d --network coolify --name i4wks40 --rm -v /var/run/docker.sock:/var/run/docker.sock ghcr.io/coollabsio/coolify-helper:latest
[OUTPUT]
<hash>
[COMMAND] docker exec i4wks40 bash -c 'GIT_SSH_COMMAND="ssh -o ConnectTimeout=30 -p 22 -o Port=22 -o LogLevel=ERROR -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" git ls-remote https://x-access-token:<REDACTED>@github.com/test/app.isd.git main'
[OUTPUT]
<hash> refs/heads/main

[COMMAND] docker exec i4wks40 bash -c 'git clone -b "main" https://x-access-token:<REDACTED>@github.com/test/app.isd.git /artifacts/i4wks40 && cd /artifacts/i4wks40 && GIT_SSH_COMMAND="ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" git submodule update --init --recursive && cd /artifacts/i4wks40 && GIT_SSH_COMMAND="ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" git lfs pull'
[OUTPUT]
Cloning into '/artifacts/i4wks40'...
[COMMAND] docker exec i4wks40 bash -c 'cd /artifacts/i4wks40 && git log -1 <hash> --pretty=%B'
[OUTPUT]
0.9.5_dockerUpdates

 Image not found (<internal coolify project code>:<hash>). Building new image.
[COMMAND] docker exec i4wks40 bash -c 'cat /artifacts/i4wks40/Dockerfile'
[OUTPUT]
# base
# ----
FROM node:20-bookworm-slim as base

RUN corepack enable ... (it prints the whole Dockerfile here)

Building docker image started. (This is where the fun begins)

[COMMAND] docker exec i4wks40 bash -c 'bash /artifacts/build.sh'
[OUTPUT]
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 3.98kB done
#1 DONE 0.0s

#2 [internal] load metadata for docker.io/library/node:20-bookworm-slim

[COMMAND] docker exec i4wks40 bash -c 'bash /artifacts/build.sh'
[OUTPUT]
#2 DONE 0.2s

[COMMAND] docker exec i4wks40 bash -c 'bash /artifacts/build.sh'
[OUTPUT]
#3 [internal] load .dockerignore
#3 transferring context: 182B done
#3 DONE 0.0s

#4 [base 1/14] FROM docker.io/library/node:20-bookworm-slim@sha256:0ff3b9e24e805e08f2e4f822957d1deee86bb07927c70ba8440de79a6a885da6
#4 DONE 0.0s

#5 [internal] settings cache mount permissions
#5 DONE 0.0s

#6 [base 2/14] RUN corepack enable
#6 CACHED

#7 [base 3/14] RUN apt-get update && apt-get install -y openssl && rm -rf /var/lib/apt/lists/*
#7 CACHED

#8 [base 4/14] WORKDIR /app
#8 CACHED

#9 [internal] load build context

[COMMAND] docker exec i4wks40 bash -c 'bash /artifacts/build.sh'
[OUTPUT]
#9 transferring context: 4.08MB 0.2s done
#9 DONE 0.2s

#5 [internal] settings cache mount permissions
#5 CACHED

#10 [base 5/14] COPY --chown=node:node .yarnrc.yml .
#10 DONE 0.1s

[COMMAND] docker exec i4wks40 bash -c 'bash /artifacts/build.sh'
[OUTPUT]
#11 [base 6/14] COPY --chown=node:node package.json .
#11 DONE 0.0s

... continues with the rest, all DONE (yarn install worked, so you were right!)

[COMMAND] docker exec i4wks40 bash -c 'bash /artifacts/build.sh'
[OUTPUT]
#17 27.25 ➤ YN0013: │ 2400 packages were added to the project (+ 667.29 MiB).

[COMMAND] docker exec i4wks40 bash -c 'bash /artifacts/build.sh'
[OUTPUT]
#17 27.25 ➤ YN0000: └ Completed in 25s 362ms

(!) Right after this step it goes for Prisma, and even though it finishes with warnings, maybe this isn’t an issue after all? I remember reading months ago about having to run a migrate dev or removing your migrations folder, but I don’t think those are the issues I’m seeing.

[COMMAND] docker exec i4wks40 bash -c 'bash /artifacts/build.sh'
[OUTPUT]
#17 54.53 ➤ YN0007: │ @prisma/engines@npm:5.2.0 must be built because it never has been before or the last one failed

[COMMAND] docker exec i4wks40 bash -c 'bash /artifacts/build.sh'
[OUTPUT]
#17 54.53 ➤ YN0007: │ core-js@npm:3.37.1 must be built because it never has been before or the last one failed
#17 54.54 ➤ YN0007: │ esbuild@npm:0.21.3 must be built because it never has been before or the last one failed
#17 54.54 ➤ YN0007: │ core-js-pure@npm:3.37.1 must be built because it never has been before or the last one failed
#17 54.54 ➤ YN0007: │ better-sqlite3@npm:8.6.0 must be built because it never has been before or the last one failed
#17 54.54 ➤ YN0007: │ @prisma/engines@npm:4.16.2 must be built because it never has been before or the last one failed
#17 54.54 ➤ YN0007: │ msw@npm:1.3.3 [1ea41] must be built because it never has been before or the last one failed
#17 54.54 ➤ YN0007: │ @prisma/client@npm:5.14.0 [dc82c] must be built because it never has been before or the last one failed
#17 54.54 ➤ YN0007: │ esbuild@npm:0.18.20 must be built because it never has been before or the last one failed
#17 54.54 ➤ YN0007: │ @swc/core@npm:1.6.3 [6b6bb] must be built because it never has been before or the last one failed
#17 54.54 ➤ YN0007: │ @prisma/engines@npm:5.14.0 must be built because it never has been before or the last one failed

[COMMAND] docker exec i4wks40 bash -c 'bash /artifacts/build.sh'
[OUTPUT]
#17 60.86 ➤ YN0000: │ @prisma/client@npm:5.14.0 [dc82c] STDERR prisma:warn We could not find your Prisma schema in the default locations (see: https://pris.ly/d/prisma-schema-location.
#17 60.86 ➤ YN0000: │ @prisma/client@npm:5.14.0 [dc82c] STDERR If you have a Prisma schema file in a custom path, you will need to run

[COMMAND] docker exec i4wks40 bash -c 'bash /artifacts/build.sh'
[OUTPUT]
#17 60.87 ➤ YN0000: │ @prisma/client@npm:5.14.0 [dc82c] STDERR `prisma generate --schema=./path/to/your/schema.prisma` to generate Prisma Client.
#17 60.87 ➤ YN0000: │ @prisma/client@npm:5.14.0 [dc82c] STDERR If you do not have a Prisma schema file yet, you can ignore this message.
#17 60.87 ➤ YN0000: │ @prisma/client@npm:5.14.0 [dc82c] STDOUT

[COMMAND] docker exec i4wks40 bash -c 'bash /artifacts/build.sh'
[OUTPUT]
#17 61.83 ➤ YN0007: │ prisma@npm:5.14.0 must be built because it never has been before or the last one failed

[COMMAND] docker exec i4wks40 bash -c 'bash /artifacts/build.sh'
[OUTPUT]
#17 62.02 ➤ YN0000: └ Completed in 34s 603ms

[COMMAND] docker exec i4wks40 bash -c 'bash /artifacts/build.sh'
[OUTPUT]
#17 62.31 ➤ YN0000: · Done with warnings in 1m 2s

[COMMAND] docker exec i4wks40 bash -c 'bash /artifacts/build.sh'
[OUTPUT]
#17 DONE 63.0s

[COMMAND] docker exec i4wks40 bash -c 'bash /artifacts/build.sh'
[OUTPUT]
#23 writing image sha256:<hash> done
#23 naming to docker.io/library/<hash> done
#23 DONE 32.0s

Building docker image completed.
Rolling update started.

[COMMAND] docker exec i4wks40 bash -c 'SOURCE_COMMIT=hash
COOLIFY_FQDN=https://app.test.app COOLIFY_URL=app.test.app COOLIFY_BRANCH=main docker compose --project-directory /artifacts/i4wks40 -f /artifacts/i4wks40/docker-compose.yml up --build -d'
[OUTPUT]
Container id-hash Creating

[COMMAND] docker exec i4wks40 bash -c 'SOURCE_COMMIT=hash
COOLIFY_FQDN=https://app.test.app COOLIFY_URL=app.test.app
COOLIFY_BRANCH=main docker compose --project-directory /artifacts/i4wks40 -f /artifacts/i4wks40/docker-compose.yml up --build -d'
[OUTPUT]
id-hash Your kernel does not support memory swappiness capabilities or the cgroup is not mounted. Memory swappiness discarded.

[COMMAND] docker exec i4wks40 bash -c 'SOURCE_COMMIT=hash
COOLIFY_FQDN=https://app.test.app COOLIFY_URL=app.test.app COOLIFY_BRANCH=main docker compose --project-directory /artifacts/i4wks40 -f /artifacts/i4wks40/docker-compose.yml up --build -d'
[OUTPUT]
Container id-hash Created
Container id-hash Starting

[COMMAND] docker exec i4wks40 bash -c 'SOURCE_COMMIT=hash
COOLIFY_FQDN=https://test.app COOLIFY_URL=app.test.app
COOLIFY_BRANCH=main docker compose --project-directory /artifacts/i4wks40 -f /artifacts/i4wks40/docker-compose.yml up --build -d'
[OUTPUT]
Container id-hash Started

Waiting for healthcheck to pass on the new container.
Waiting for the start period (30 seconds) before starting healthcheck.
Attempt 1 of 5 | Healthcheck status: "starting"

[COMMAND] docker inspect --format='{{json .State.Health.Status}}' id-hash
[OUTPUT]
"unhealthy"

[COMMAND] docker inspect --format='{{json .State.Health.Log}}' id-hash
[OUTPUT]
[]

Attempt 2 of 5 | Healthcheck status: "unhealthy"

Removing old containers.
WARNING: Dockerfile or Docker Image based deployment detected. The healthcheck needs a curl or wget command to check the health of the application. Please make sure that it is available in the image or turn off healthcheck on Coolify's UI.
New container is not healthy, rolling back to the old container.
Rolling update completed.

And that’s it. The app just hangs on “Restarting (unhealthy)”, with that being the last output in the deploy log.

Now, are you saying I could use a --target parameter for the Dockerfile in the Coolify UI where it says Custom Docker Options (see screenshot above)? So that it only deploys the api or web side? That would be fantastic; I honestly didn’t realize that was an option. I’d essentially prefer to build the web side as a static site and the api side as a web service.
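For context, outside of Coolify this is the same idea as passing a build target to plain Docker against the multi-stage Dockerfile above; a minimal sketch (image names are made up):

# build only the api stage
docker build --target api_serve -t myapp-api .

# build only the web stage
docker build --target web_serve -t myapp-web .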

I can’t grab the build.sh because if I want to run commands on the project image I get:

Error response from daemon: Container e0270bb0d5e6afbf3838c0c2ae805ac035d065c75c7a00a8ab34ac3ef191524e is restarting, wait until the container is running

Makes debugging a bit difficult :confused:

Regarding build targets, I tried redeploying and setting “api_build” as the Docker Build Stage Target. I got past the Prisma errors, but my log looks like this now:

...
[COMMAND] docker exec oc4o0kg bash -c 'bash /artifacts/build.sh'
[OUTPUT]
#21 8.598 ❯ Generating Prisma Client...

[COMMAND] docker exec oc4o0kg bash -c 'bash /artifacts/build.sh'
[OUTPUT]
#21 12.52 ✔ Generating Prisma Client...

[COMMAND] docker exec oc4o0kg bash -c 'bash /artifacts/build.sh'
[OUTPUT]
#21 12.52 ❯ Generating types needed for GraphQL Fragments support...

[COMMAND] docker exec oc4o0kg bash -c 'bash /artifacts/build.sh'
[OUTPUT]
#21 29.66 ✖ Generating types needed for GraphQL Fragments support... [FAILED: The "path" argument must be of type string or an instance of Buffer or URL. Received null]

TypeError [ERR_INVALID_ARG_TYPE]: The "path" argument must be of type string or an instance of Buffer or URL. Received null
#21 29.66 at Object.readFileSync (node:fs:446:42)
#21 29.66 at fileToAst (/app/node_modules/@redwoodjs/internal/dist/ast.js:22:28)
#21 29.66 at generateTypeDefRouterRoutes (/app/node_modules/@redwoodjs/internal/dist/generate/typeDefinitions.js:202:34)
#21 29.66 at generateTypeDefs (/app/node_modules/@redwoodjs/internal/dist/generate/typeDefinitions.js:61:157)
#21 29.66 at async generate (/app/node_modules/@redwoodjs/internal/dist/generate/generate.js:27:7)
#21 29.66 at async _Task.task [as taskFn] (/app/node_modules/@redwoodjs/cli/dist/commands/buildHandler.js:109:9)
#21 29.66 at async _Task.run (/app/node_modules/listr2/dist/index.cjs:2049:11)
#21 29.66
#21 29.66
#21 29.66 Need help?
#21 29.66 - Not sure about something or need advice? Reach out on our Forum (​https://community.redwoodjs.com/​)
#21 29.66 - Think you've found a bug? Open an issue on our GitHub (​https://github.com/redwoodjs/redwood​)

[COMMAND] docker exec oc4o0kg bash -c 'bash /artifacts/build.sh'
[OUTPUT]
#21 ERROR: process "/bin/sh -c yarn rw build api" did not complete successfully: exit code: 1

Hmm, no bueno. Even if I disable fragments, the deployment stops at:

...
[2024-Jun-20 23:55:16.277452] New container started.
[2024-Jun-20 23:55:16.282404] Custom healthcheck found, skipping default healthcheck.
[2024-Jun-20 23:55:16.286871] Waiting for healthcheck to pass on the new container.
[2024-Jun-20 23:55:16.290528] Healthcheck URL (inside the container): GET: http://localhost:8911/graphql/health
[2024-Jun-20 23:55:16.294260] Waiting for the start period (30 seconds) before starting healthcheck.
[2024-Jun-20 23:55:46.440906]

[COMMAND] docker inspect --format='{{json .State.Health.Status}}' id-hash
[OUTPUT]
"unhealthy"

[2024-Jun-20 23:55:46.613892]

[COMMAND] docker inspect --format='{{json .State.Health.Log}}' id-hash
[OUTPUT]
[]

[2024-Jun-20 23:55:46.619141] Attempt 1 of 5 | Healthcheck status: "unhealthy"
[2024-Jun-20 23:55:46.627301] ----------------------------------------
[2024-Jun-20 23:55:46.632441] Container logs:
[2024-Jun-20 23:55:46.762859] ----------------------------------------
[2024-Jun-20 23:55:46.767296] Removing old containers.
[2024-Jun-20 23:55:46.771696] ----------------------------------------
[2024-Jun-20 23:55:46.775315] WARNING: Dockerfile or Docker Image based deployment detected. The healthcheck needs a curl or wget command to check the health of the application. Please make sure that it is available in the image or turn off healthcheck on Coolify's UI.
[2024-Jun-20 23:55:46.778734] ----------------------------------------
[2024-Jun-20 23:55:46.782808] New container is not healthy, rolling back to the old container.
[2024-Jun-20 23:55:46.924383] Rolling update completed.

I’m still wondering why api_build manages to get the Prisma client generated and all, but if I don’t specify any build target it says Prisma was never generated and I get the warnings above.

Oh heck yeah, I got it deployed, somewhat!! So I tried the Nixpacks and Dockerfile methods. Neither of them worked for me (perhaps I used the wrong configs).

HOWEVER, if you choose Docker Compose in Coolify you can specify the following and RedwoodJS will build successfully on Coolify (it only took about 20s to build, lol?). Notice how the UI gives you two subdomains to specify, one for the api server and one for the web server.

I didn’t edit anything in the Dockerfile posted above.
docker-compose.prod.yml

version: "3.8"

services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
      target: api_serve
    # Without a command specified, the Dockerfile's api_serve CMD will be used.
    # If you are using a custom server file, you should either use the following
    # command to launch your server or update the Dockerfile to do so.
    # This is important if you intend to configure GraphQL to use Realtime.
    # command: "./api/dist/server.js"
    ports:
      - "8911:8911"
    healthcheck:
      test: ["CMD", "wget -qO- http://localhost:8911/graphql/health || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 5

  web:
    build:
      context: .
      dockerfile: Dockerfile
      target: web_serve
    ports:
      - "8910:8910"
    depends_on:
      - api
    environment:
      - API_PROXY_TARGET=http://localhost:8911

My deploy logs don’t finish and it just hangs on the servers listening:

I wonder if that’s when I add a post-deployment command as documented in the yml file:

docker compose -f ./docker-compose.prod.yml run --rm -it console /bin/bash

Anyway, I think right now there’s a network issue with the proxy, because it tells me the servers are listening on http://0.0.0.0:8911 and 8910, but in my Dockerfile and compose yml I set it as http://localhost:8911 and 8910. I have to keep reminding myself that localhost is the loopback while 0.0.0.0 means all interfaces. Something to investigate, because when I go to my static web app through the https domain I see my Redwood login page, but I get:

connect ECONNREFUSED 127.0.0.1:8911

Not completely clean but I think I know where to start troubleshooting tomorrow. At least I can now reach my website on both the API (https://api.test.app/graphql) and WEB (https://web.test.app/login) subdomains.

It should be the box above, next to the Dockerfile location, which is empty. I mainly asked about the build.sh to see which part of the Dockerfile gets built. I’m assuming it picks the base target if you don’t specify any, and the base target isn’t really meant to be deployed.

Anyway, I wondered if the docker-compose option wouldn’t be better. Glad to hear you got it working! :grinning:

As for the proxy error: if you deploy the api on a different domain, you usually need to add the API_PROXY_TARGET environment variable when starting the web server.

node_modules/.bin/rw-web-server --api-proxy-target $API_PROXY_TARGET

The api-proxy-target isn’t really well documented, and it doesn’t help that the name of the argument has changed multiple times. :sweat_smile:
The docs only mention it under the old name (Command Line Interface | RedwoodJS Docs) as the --apiHost arg, not under the current name.
redwood/packages/cli/src/commands/serveBothHandler.js at 4637f61d5e6aeb907d4a17217ab643cfb4d4ebe4 · redwoodjs/redwood · GitHub
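In a Compose setup where the api side runs as its own service, that typically amounts to something like the snippet below (assuming the api service is named api, as the Dockerfile’s web_serve stage earlier in this thread already does):

  web:
    environment:
      - API_PROXY_TARGET=http://api:8911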

Yes, I think I have almost everything working now: the health check, the log no longer hangs, and I get a nice split between my web and api sides with separate containers. The docker-compose approach works, and I’ll be posting another thread detailing it as a guide.

My last issue comes down to Redwood or my implementation of it.

I found a bug where you can’t build it with GraphQL fragments. It spits out the error I posted in a log above:

TypeError [ERR_INVALID_ARG_TYPE]: The "path" argument must be of type string or an instance of Buffer or URL. Received null

Then, whenever I go to my web.appname and try to go through the dbAuth login process I get:

80EC2E1EBC7F0000:error:0A00010B:SSL routines:ssl3_get_record:wrong version number:../deps/openssl/openssl/ssl/record/ssl3_record.c:354:

Now, I noticed we install openssl in the Dockerfile. I’m at a loss for how to debug this, but that’s probably where the solution lies.

RUN apt-get update && apt-get install -y \
    openssl \ <-------------
    && rm -rf /var/lib/apt/lists/*

Alternatively, googling yields that a number of things can cause it. I wonder if my api is built correctly? Looking at the contents of the api container, I see:

drwxr-xr-x 1 node node   4096 Jun 21 14:08 api
drwxr-xr-x 1 node node   4096 Jun 21 14:08 node_modules
-rw-r--r-- 1 node node    544 Jun 21 14:02 package.json
-rw-r--r-- 1 node node    789 Jun 21 14:07 redwood.toml
-rw-r--r-- 1 node node 969837 Jun 21 14:07 yarn.lock

And under api I see:

drwxr-xr-x 3 node node 4096 Jun 21 14:02 db
drwxr-xr-x 8 node node 4096 Jun 21 14:03 dist
-rw-r--r-- 1 node node  368 Jun 21 14:02 package.json

My schema.prisma and migrations folder are in there. At no point in the deployment process do I call migrate dev; should I? I feel like I need to clean out the migrations folder and run a migration. Would this be a post-deployment command?

I haven’t used GraphQL fragments since they were introduced, and the SSL error is beyond my knowledge.

But the migration is possible, although it’s up to you whether you run it with the other containers or consider it an admin task not related to the deployment.

We were confident that we wouldn’t run into any migration issues after our staging environment, so we let them run as an initContainer (Init Containers | Kubernetes) from the console target (because you can’t run the CLI from the api_serve and web_serve containers).

So we’ve added this last line here in the Dockerfile:

# console
# ------------------------------------------------
FROM base as console

COPY --chown=node:node api api
COPY --chown=node:node web web
COPY --chown=node:node scripts scripts

RUN yarn redwood prisma generate

Theoretically you could do without it, but the prisma generate command downloads certain files, so it wouldn’t be self-contained.

And here’s the docker-compose equivalent of the init container:

  init:
    build:
      context: .
      dockerfile: ./Dockerfile
      target: console
    command: >
      sh -c "yarn redwood prisma migrate deploy"
    depends_on:
      db:
        condition: service_healthy
    environment:
      - DATABASE_URL=postgresql://redwood:redwood@db:5432/redwood

  api:
    build:
      context: .
      dockerfile: ./Dockerfile
      target: api_serve
    ports:
      - "8911:8911"
    depends_on:
      - init
    environment:
      - DATABASE_URL=postgresql://redwood:redwood@db:5432/redwood
      - NODE_ENV=production

Very cool, thank you. I will use that to experiment with Prisma once everything else seems good.

I noticed something, and I don’t know if it’s related to Coolify or docker-compose, but I have looked at the Compose v2 docs and I believe I am doing this correctly. My deployment fails to build the api or web side if they have a healthcheck in the YAML file.

    healthcheck:
      test: ["CMD", "curl --fail http://localhost:8911/graphql/health || exit 1"]
      interval: 20s
      timeout: 10s
      retries: 3
      start_period: 30s

I tried that and

      test: curl --fail http://localhost:8911/graphql/health || exit 1

Or with wget

      test: wget -qO- http://localhost:8911/graphql/health || exit 1

And variations using https, or the domain instead of localhost. No matter what, when a healthcheck is attached to the api or web side I get a 404 on the final deployed domain.
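A hedged observation on those variants: in the exec-form version, ["CMD", "curl --fail … || exit 1"] passes the whole string as a single executable name, so it can never succeed; shell operators like || need either the plain string form or CMD-SHELL. And as noted earlier, curl and wget aren’t present in the bookworm-slim images unless they’re installed. A sketch of a form that should at least execute, assuming wget has been added to the serve stage:

    healthcheck:
      test: ["CMD-SHELL", "wget -qO- http://localhost:8911/graphql/health || exit 1"]
      interval: 20s
      timeout: 10s
      retries: 3
      start_period: 30s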

I am also just wondering whether the whole build and deploy should take only 20 seconds? I use the compose up start command but nothing in the build command on Coolify. I see the YAML targets api_serve and web_serve, but even though the deploy logs show the yarn rw build api/web commands as cached (which tells me it’s running the Dockerfile), I am not sure whether it’s actually executing them.

Alright, this seems to have cleaned it up. I am using the following compose file and I have put the HEALTHCHECK into the Dockerfile at the end, after console. However, I do not think the healthcheck works, as I do not see it in the deploy logs and Coolify’s health indicator remains with a warning:

Unhealthy state. This doesn't mean that the resource is malfunctioning.

- If the resource is accessible, it indicates that no health check is configured - it is not mandatory.
- If the resource is not accessible (returning 404 or 503), it may indicate that a health check is needed and has not passed. Your action is required.

Here is the file I used, and I can now 100% confirm my api and web sides work with dbAuth login, etc. My website is fully online. The SSL version error was happening because my API_PROXY_TARGET was using https.

version: "3.8"

services:
  api:
    build:
      context: .
      dockerfile: ./Dockerfile
      target: api_serve
    ports:
      - "8911:8911"
    environment:
      - ...removed

  web:
    build:
      context: .
      dockerfile: ./Dockerfile
      target: web_serve
    ports:
      - "8910:8910"
    depends_on:
      - api
    environment:
      - NODE_ENV=production
      - API_PROXY_TARGET=http://localhost:8911

Is there anything I am missing in the YAML file? As you mentioned, I do not even need to generate the Prisma client; it all just works out of the box. And again, the whole build process took about 20s, whereas it would usually take a minute or more on Netlify before I made this switch to serverful and Coolify.

One last addendum: I think I understand why I was receiving the GraphQL fragments error during deployment. If I inspect my web directory in the container, I don’t see fragments at all. It looks something like this:

-rw-r--r-- 1 node node     476 Jun 21 16:20 200.html
drwxr-xr-x 2 node node    4096 Jun 21 16:20 assets
-rw-r--r-- 1 node node   13159 Jun 21 16:20 build-manifest.json
-rw-r--r-- 1 node node    1741 Jun 21 16:20 favicon.png
drwxr-xr-x 2 node node    4096 Jun 21 16:20 fonts
-rw-r--r-- 1 node node     476 Jun 21 16:20 index.html
drwxr-xr-x 2 node node    4096 Jun 21 16:20 loaders
-rw-r--r-- 1 node node      24 Jun 21 16:20 robots.txt

Naturally, nowhere in the Dockerfile do I copy fragments, so I think I’m going to tinker with that and see what’s up. Interestingly enough, my cells that use fragments work on my deployed site, even with fragments = false in the redwood.toml. I wonder what magic that is, lol. Probably through the graphql.config.js where possibleTypes are generated? That’s part of my App.tsx.

Could I ask your opinion on the following? I tried setting up the whole environment now and I got Redis to work. I don’t do any tasks for Redis during deployment, but I’m having trouble getting Postgres connected and getting the Prisma migration through.

I saw that init containers exist for docker compose, but they aren’t even documented, lol. When I try to use them in my compose file, Coolify creates its own service container, which isn’t something I want for my service stack. There shouldn’t be an init container that just continuously creates logs and isn’t actually relevant to production.
I had something like this:

  init-db:
    image: node:20-bookworm-slim
    command: sh -c 'yarn rw prisma migrate deploy'
    environment:
      - YARN_VERSION=4.3.0
      - NODE_ENV=production
      - DATABASE_URL=postgres://postgres:password@docker-container:5432/postgres
    depends_on:
      api:
        condition: service_healthy

Sadly it returned a bunch of errors in the init-db container logs:

yarn run v1.22.22
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
error Couldn't find a package.json file in "/"

So I decided to try it with the Dockerfile and I added the following without changing the compose file at all:

# api build
# ---------
FROM base as api_build

COPY --chown=node:node api api
COPY --chown=node:node scripts/seed.ts scripts/seed.ts
RUN yarn rw build api

ENV DATABASE_URL=postgres://postgres:password@postgres-c4gkckk:5432/postgres

RUN yarn rw prisma migrate deploy
RUN yarn rw prisma db seed

Now, I know you said you can’t run the CLI from api_serve, but my deployment does seem to run the commands; it just can’t reach the Postgres database container:

[COMMAND] docker exec l44ggwk bash -c 'SOURCE_COMMIT=<hash> COOLIFY_BRANCH=main docker compose --env-file /artifacts/l44ggwk/.env --project-directory /artifacts/l44ggwk -f /artifacts/l44ggwk/docker-compose.prod.yml build'
[OUTPUT]
#23 2.460 Running Prisma CLI...
#23 2.460 $ yarn prisma migrate deploy --schema /app/api/db/schema.prisma
#23 2.460

[COMMAND] docker exec l44ggwk bash -c 'SOURCE_COMMIT=<hash> COOLIFY_BRANCH=main docker compose --env-file /artifacts/l44ggwk/.env --project-directory /artifacts/l44ggwk -f /artifacts/l44ggwk/docker-compose.prod.yml build'
[OUTPUT]
#23 3.461 Prisma schema loaded from api/db/schema.prisma

[COMMAND] docker exec l44ggwk bash -c 'SOURCE_COMMIT=<hash> COOLIFY_BRANCH=main docker compose --env-file /artifacts/l44ggwk/.env --project-directory /artifacts/l44ggwk -f /artifacts/l44ggwk/docker-compose.prod.yml build'
[OUTPUT]
#23 3.501 Datasource "db": PostgreSQL database "postgres", schema "public" at "postgres-c4gkckk:5432"
#23 3.562

[COMMAND] docker exec l44ggwk bash -c 'SOURCE_COMMIT=<hash> COOLIFY_BRANCH=main docker compose --env-file /artifacts/l44ggwk/.env --project-directory /artifacts/l44ggwk -f /artifacts/l44ggwk/docker-compose.prod.yml build'
[OUTPUT]
#23 3.564 Error: P1001: Can't reach database server at `postgres-c4gkckk:5432`
#23 3.564
#23 3.564 Please make sure your database server is running at `postgres-c4gkckk:5432`.
#23 ERROR: process "/bin/sh -c yarn rw prisma migrate deploy" did not complete successfully: exit code: 1

[COMMAND] docker exec l44ggwk bash -c 'SOURCE_COMMIT=<hash> COOLIFY_BRANCH=main docker compose --env-file /artifacts/l44ggwk/.env --project-directory /artifacts/l44ggwk -f /artifacts/l44ggwk/docker-compose.prod.yml build'
[OUTPUT]
------
> [api api_build 4/5] RUN yarn rw prisma migrate deploy:
2.458
2.460 Running Prisma CLI...
2.460 $ yarn prisma migrate deploy --schema /app/api/db/schema.prisma
2.460
3.461 Prisma schema loaded from api/db/schema.prisma
3.501 Datasource "db": PostgreSQL database "postgres", schema "public" at "postgres-c4gkckk:5432"
3.562
3.564 Error: P1001: Can't reach database server at `postgres-c4gkckk:5432`
3.564
3.564 Please make sure your database server is running at `postgres-c4gkckk:5432`.
------

I tried substituting the container’s IP for the container name, but that didn’t work either. I should mention that I am mapping the ports to the host system with 3000:5432, because Coolify already has a Postgres instance for its own server on 5432. I tried the connection string with both 5432 and 3000 as the port.

What am I missing here? I have a very similar setup for my Redis container, and once my api is built it can connect to Redis no problem. Using docker exec on the container I can see via psql that the database is running, and even nmap shows me the port is listening. At the very top of my deployment I can see that Coolify is using its network to set things up with its helper image (Coolify adds some environment variables to the docker-compose so it can internally manage the container).

[COMMAND] docker run -d --network coolify --name l44ggwk --rm -v /var/run/docker.sock:/var/run/docker.sock ghcr.io/coollabsio/coolify-helper:latest

I tried using nmap during the base build, and indeed it can’t reach the Postgres container from there (same story for the Redis container):

[COMMAND] docker exec y0gsc8k bash -c 'SOURCE_COMMIT=hash COOLIFY_BRANCH=main docker compose --env-file /artifacts/y0gsc8k/.env --project-directory /artifacts/y0gsc8k -f /artifacts/y0gsc8k/docker-compose.prod.yml build'
[OUTPUT]
#10 [api base 5/16] RUN nmap 172.18.0.8

[COMMAND] docker exec y0gsc8k bash -c 'SOURCE_COMMIT=hash COOLIFY_BRANCH=main docker compose --env-file /artifacts/y0gsc8k/.env --project-directory /artifacts/y0gsc8k -f /artifacts/y0gsc8k/docker-compose.prod.yml build'
[OUTPUT]
#10 0.272 Starting Nmap 7.93 ( https://nmap.org ) at 2024-06-23 15:50 UTC

[COMMAND] docker exec y0gsc8k bash -c 'SOURCE_COMMIT=hashCOOLIFY_BRANCH=main docker compose --env-file /artifacts/y0gsc8k/.env --project-directory /artifacts/y0gsc8k -f /artifacts/y0gsc8k/docker-compose.prod.yml build'
[OUTPUT]
#10 3.333 Note: Host seems down. If it is really up, but blocking our ping probes, try -Pn
#10 3.333 Nmap done: 1 IP address (0 hosts up) scanned in 3.09 seconds

I can clearly see my container running in docker ps -a:

19fxa34b5fa   postgres:16-alpine                          "docker-entrypoint.s…"   37 minutes ago      Up 37 minutes (healthy)         0.0.0.0:3000->5432/tcp, :::3000->5432/tcp                                                                             postgres-c4gkckk

Docker inspect shows me that the container is using the coolify network config:

            "Ports": {
                "5432/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "3000"
                    },
                    {
                        "HostIp": "::",
                        "HostPort": "3000"
                    }
                ]
            },
 "coolify": {
                    "IPAMConfig": {},
                    "Links": null,
                    "Aliases": [
                        "postgres-c4gkckk"
                    ],
                    "MacAddress": "....",
                    "NetworkID": "...",
                    "EndpointID": "...",
                    "Gateway": "172.18.0.1",
                    "IPAddress": "172.18.0.8",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "DriverOpts": {},
                    "DNSNames": [
                        "postgres-c4gkckk",
                        "19fxa34b5fa"
                    ]
                }

I’m at a loss for why I can’t reach it. Clearly, once it’s deployed it’s able to reach the container just fine, as I’ve tested with Redis, and ultimately the api container doesn’t complain about being unable to connect to Postgres once it’s running. The Coolify docker helper image I mentioned earlier has the following network config:

         "Networks": {
                "coolify": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "MacAddress": "...removed",
                    "NetworkID": "...removed",
                    "EndpointID": "...removed",
                    "Gateway": "172.18.0.1",
                    "IPAddress": "172.18.0.9",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "DriverOpts": null,
                    "DNSNames": [
                        "jc0oo4k",
                        "f0a31fc9a2a5"
                    ]
                }

It’s on the same Gateway. It should be able to reach the other containers, no?

// EDIT: Upon digging through Coolify’s issues, it seems this is a known bug. There’s a fix in the works for using the internal Docker hostnames. Will report back when it works! Sorry for the long post :sweat_smile:

It should be

  init-db:
    build:
      context: .
      dockerfile: ./Dockerfile
      target: console
    command: >
      sh -c "yarn redwood prisma migrate deploy"

instead.

… and I’m unsure about the database connection. Maybe check if the database with said name even exists? The Prisma migration CLI only runs migrations; it doesn’t actually create a database. So if you changed the connection string and/or the default from postgres, you may have run into that.


Yeah, I was trying several different iterations, but ultimately the UI creates a container for that init build and it’s not something I want. There has to be a more elegant solution, which is why I focused on the Dockerfile. From what I’ve read about other deployment solutions, the build process should have access to the other containers as long as it’s using the Docker network. Like I said, I can’t even ping or nmap the other containers by hostname or IP during the Dockerfile build. Once it’s deployed the connections work, so it seems to be an issue with how Coolify boots the helper image and sets up the networking during build time. The issue is open on their GitHub; hopefully the one-man dream team will figure it out :joy:

When you say

The Prisma migration CLI only runs migrations; it doesn’t actually create a database. So if you changed the connection string and/or the default from postgres, you may have run into that.

Are you implying migrate deploy doesn’t actually create the tables? I thought it did; whenever I run migrate dev it builds the SQL to create tables, fields, etc. and applies it to the database. My connection string is:

postgres://postgres:password@postgres-c4gkckk:5432/postgres

Where postgres-c4gkckk is the Docker container’s hostname and DNS alias. I definitely made sure the container runs healthy, I can connect with psql, and like I said, after deployment the api can connect to the database with that string. Sadly, it fails only during build time :frowning_face: I’ll report back once Coolify updates the GitHub issue. For now, if anyone’s trying to set up a Postgres database and seed it with Redwood, be aware of this limitation.

It creates the tables but not the database itself.

postgres-c4gkckk is probably invalid; it sounds like the actual name of the container.
In Docker networking you should use the name of the service instead (Bridge network driver | Docker Docs).

So for a docker-compose like this:

services:
  db:
    image: postgres:16-bookworm
    environment:
      POSTGRES_USER: redwood
      POSTGRES_PASSWORD: redwood
      POSTGRES_DB: redwood
    ports:
      - "5432:5432"

  redwood:
    build:
      context: .
      dockerfile: ./Dockerfile
      target: base
    command: >
      sh -c "yarn redwood dev"
    volumes:
      - .:/home/node/app
    ports:
      - "8910:8910"
    depends_on:
     - db
    environment:
      - DATABASE_URL=postgresql://redwood:redwood@db:5432/redwood

the DNS name for container networking is db:

postgresql://redwood:redwood@db:5432/redwood

even if the running container name turns out to be db-1 or db-2, e.g. if you’re using replicas.

I’m guessing that you can simply remove the c4gkckk part and it should work.

postgres://postgres:password@postgres:5432/postgres

Or maybe the database and the deployed api aren’t on the same network; DNS resolution only works if they are. I have no idea which network is used during build time. I’m guessing it’s the default docker network or the host network.
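One avenue that might be worth testing (an assumption on my part, not something verified in this thread): the Compose build section accepts a network key, which sets the network used for RUN instructions during the build. A sketch, assuming the external network is literally named coolify:

  api:
    build:
      context: .
      dockerfile: ./Dockerfile
      target: api_serve
      network: coolify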


Good points, I did some more testing. I ran ifconfig during the deployment and I’m getting:

#9 [api base 4/15] RUN ifconfig
#9 0.297 eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
#9 0.297 inet 172.17.0.2 netmask 255.255.0.0 broadcast 172.17.255.255
#9 0.297 ether [mac] txqueuelen 0 (Ethernet)
#9 0.297 RX packets 4 bytes 370 (370.0 B)
#9 0.297 RX errors 0 dropped 0 overruns 0 frame 0
#9 0.297 TX packets 0 bytes 0 (0.0 B)
#9 0.297 TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

If you compare this to the IP of the deployment helper image Coolify is using, it’s clearly a different subnet, hence none of the DNS resolution works, as you suggested. All my containers are on the “coolify” network, which is 172.18.x.x.

It seems like he just needs to ensure the helper image network is the same as the coolify network, so you can reach other resources during deployment :thinking: I was hoping that by adding the “network” config to my docker compose I could circumvent that, but it’s only applied after deployment. The issue seems to lie in the helper image Coolify is using.

# These lines only affect the final docker image after it's built; from what I can gather they apply the same network as the "predefined networks" in the UI, but alas only AFTER successful creation of the docker image
    networks:
      coolify: null

networks:
  coolify:
    name: coolify
    external: true

I think my next research topic is how to connect Docker subnets, either in Docker or in Traefik (the proxy Coolify uses). Instead of waiting for Coolify to fix this (if they do), it seems like an easier route to just add the necessary config myself to allow the 172.17.x.x and 172.18.x.x devices to communicate. Then I can finally migrate and seed my database :joy:

//EDIT:
Okay, I think I’ve finally narrowed it down. My suspicion that it was the Coolify helper image turned out to be wrong. The problem occurs right after my docker compose is pulled: at that point the docker compose build is using the “bridge” network. You can see this with docker network ls and docker network inspect; there’s a coolify and a bridge network:

coolify:

        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },

bridge:

        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },

This is the next step right after the Coolify helper image loads:

[COMMAND] docker exec ak8ksgg bash -c 'SOURCE_COMMIT=hash COOLIFY_BRANCH=main docker compose --env-file /artifacts/ak8ksgg/.env --project-directory /artifacts/ak8ksgg -f /artifacts/ak8ksgg/docker-compose.prod.yml build'
[OUTPUT]
#0 building with "default" instance using docker driver

#1 [api internal] load build definition from Dockerfile
...
#2 [api internal] load metadata for docker.io/library/node:20-bookworm-slim
...
[COMMAND] docker exec ak8ksgg bash -c 'SOURCE_COMMIT=hashCOOLIFY_BRANCH=main docker compose --env-file /artifacts/ak8ksgg/.env --project-directory /artifacts/ak8ksgg -f /artifacts/ak8ksgg/docker-compose.prod.yml build'

So here is where the networks configuration in my docker-compose.yaml is not being applied; instead it’s using the bridge network. The container shows up in the bridge network during deployment:

        "Containers": {
            "je6k9y2tia6v4au7fq15a9okp": {
                "Name": "je6k9y2tia6v4au7fq15a9okp",
                "EndpointID": "hash",
                "MacAddress": "mac",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },

Then it dawned on me: if I can’t set the build-time network for my app with docker-compose, why don’t I just add the Postgres container to the bridge network? SUCCESS.

docker network connect bridge <postgres container>
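(A small follow-up on this workaround, not from the thread itself: since the database only needs to be on the default bridge network for the build, it can be detached again afterwards.)

docker network disconnect bridge <postgres container>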

Now I can finally connect to the database during deployment. It’s a workaround but it really works! :joy: Now onto the next issue. My log now looks like this:

... yarn prisma migrate deploy --schema /app/api/db/schema.prisma
#23 2.952 Datasource "db": PostgreSQL database "postgres", schema "public" at "172.17.0.2:5432" <--- Notice the bridge subnet IP! I used this to show explicitly without using dns that it works
#23 3.046
#23 3.046 No migration found in prisma/migrations <---- I think that's what you meant with your comments above

And of course my seed fails since the tables aren’t created:

#24 [api api_build 5/5] RUN yarn rw prisma db seed
#24 2.144 Running Prisma CLI...
#24 2.144 $ yarn prisma db seed
#24 3.248 Running seed command `yarn rw exec seed` ...
#24 6.899 [COMPLETED] Generating Prisma client
#24 6.900 [STARTED] Running script


[OUTPUT]
#24 9.103 {"level":50,"time":1719247374324,"pid":102,"hostname":"buildkitsandbox","prisma":{"clientVersion":"5.14.0"},"message":"\nInvalid `db.role.count()` invocation in\n/app/scripts/seed.ts:775:24\n\n 772 },\n 773 ]\n 774 \n→ 775 if ((await db.role.count(\nThe table `public.Role` does not exist in the current database.","target":"role.count","timestamp":"2024-06-24T16:42:54.324Z","msg":"\nInvalid `db.role.count()` invocation in\n/app/scripts/seed.ts:775:24\n\n 772 },\n 773 ]\n 774 \n→ 775 if ((await db.role.count(\nThe table `public.Role` does not exist in the current database."}
... multiple of these for each record

This shouldn’t be too big of an issue to fix, though.

Solution: I wasn’t copying the scripts folder, since it wasn’t in the Dockerfile:

COPY --chown=node:node scripts ./scripts/

# RUN yarn rw prisma migrate reset (remove seed)
RUN yarn rw prisma migrate deploy
RUN yarn rw prisma db seed

BTW, the related issue for Docker and GraphQL codegen with fragments enabled is here: [Bug?]: Docker compose errors with graphql fragments · Issue #10872 · redwoodjs/redwood · GitHub, and the fix is here: [Bug?]: Docker compose errors with graphql fragments · Issue #10872 · redwoodjs/redwood · GitHub
