Self-hosted with Coolify

The Why?

With Bighorn on the way, I finally made the switch to a serverful environment. In doing so, I went through all the providers and realized that for my startup to succeed, I need full control over my app. For example, on render.com you can set up a Redis instance, but you aren't able to configure its modules; it's all managed for you, with limitations. This is where Coolify made a difference for my project. I can easily spin up my own Redis or Postgres databases and have full control over them inside a container. I can keep my databases on the same machine my app runs on and avoid creating internet traffic. They're local, so nobody else has access, and performance is incredible since the Redwood API communicates with Redis and Postgres on the same machine. Not to mention the hidden costs serverless carries.

What it is

  • Coolify is essentially like CapRover or Dokku: a PaaS with everything you need to self-host your apps
  • Uses Docker under the hood to create the containers that house your deployed apps
  • It is open-source and maintained by one dev. Currently V3 is stable and V4 is in beta.
  • Streamlines the deployment process and lets you easily add other services like Postgres, Redis, S3 storage, mailers, etc. without much of the groundwork you'd otherwise have to do yourself or the cost associated with other providers
  • No paywalls, and forever open-source with active development
  • Allows you to run a multi-tiered architecture with remote servers (you could set up a VPS for Coolify and deploy to another VPS where your app(s) are hosted)
  • Your data is private: you self-host it, you own it!

Limitations

  • Docker Swarm and Kubernetes are not available
  • Standard DevOps practices still apply, so you have plenty of work to do on backups, security and continuous maintenance of the systems and applications, i.e. patching (see the sketch below)
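As a small example of the patching side, keeping a Debian/Ubuntu VPS up to date looks roughly like this (a sketch; unattended-upgrades is one common option, adapt it to your distro and your own policies):

# Apply pending updates manually
sudo apt-get update && sudo apt-get upgrade -y

# Or let the OS install security updates automatically
sudo apt-get install -y unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades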

Prerequisites

  • You’ve set up the server.js file using yarn rw setup server-file
  • You’ll need a VPS with at least 30 GB storage, 2 vCPUs and 2 GB of memory.
  • You'll need to own your domain, but Coolify takes care of SSL through Let's Encrypt
  • Ports 22, 80 and 443 need to be open on your VPS firewall, and port 8000 is needed for the initial setup (see the example after this list)
  • Set up Coolify following any guide or video (This guide only covers Redwood deployment)
    Great overview from Syntax and getting started guide
    Coolify documentation
  • SSH into your VPS and run the following command to install Coolify V4:
sudo curl -fsSL https://cdn.coollabs.io/coolify/install.sh | bash
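If you're using ufw as your firewall, opening those ports looks something like this (a sketch; adapt it to your firewall of choice, and remember your VPS provider may have its own firewall layer on top):

sudo ufw allow 22,80,443/tcp
sudo ufw allow 8000/tcp   # only needed during the initial Coolify setup
sudo ufw status           # verify the rules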

Deployment

Let’s get into the meat of this guide. I’m assuming you’ve set up your Instance’s domain (under settings), your wildcard domain (under servers) and have either Traefik or Caddy as your proxy. I tested this using Traefik.

  1. Add your project in the UI under Projects (+Add)
  2. Select the environment you want it to run (production, but you could technically create a test environment too)
  3. Add a new resource, specifically a private GitHub repository, and select the server where your app will be running (localhost or remote)
  4. Next you’ll have to set up a GitHub App that has read access to your repository so just give it a name and Coolify will take you to GitHub for authorization
  5. Now, when creating your app for deployment make sure you select Docker Compose as your build pack (I have tried Nixpacks and Dockerfile but was unable to get Redwood deployed)
  6. You should now land on the app's configuration screen.

Now let's head over to VS Code and prepare our Redwood project for deployment.

  1. Run:
yarn rw experimental setup-docker
  2. Make sure in your redwood.toml you disable fragments (there is currently a bug and your deploy will fail, see my troubleshooting thread) and that browser.open is set to false; see the snippet below.
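The relevant redwood.toml entries look roughly like this (a sketch; the fragments flag lives under [graphql] if you previously ran the fragments setup, and key names can vary slightly between Redwood versions):

[graphql]
  fragments = false # disable until the deploy bug is fixed

[browser]
  open = false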
  3. Now, when Coolify creates the image and Docker container, it mounts the project at /app, so a lot of the configuration in the generated Dockerfile is wrong. Use this updated Dockerfile for Coolify and make sure to change the API domain for your proxy target:
# base
# ----
FROM node:20-bookworm-slim as base

RUN corepack enable
RUN apt-get update && apt-get install -y \
    openssl \
    && rm -rf /var/lib/apt/lists/*

USER node
WORKDIR /app

COPY --chown=node:node .yarnrc.yml .
COPY --chown=node:node package.json .
COPY --chown=node:node api/package.json api/
COPY --chown=node:node web/package.json web/
COPY --chown=node:node yarn.lock .

RUN mkdir -p /app/.yarn/berry/index
RUN mkdir -p /app/.cache

RUN --mount=type=cache,target=/app/.yarn/berry/cache,uid=1000 \
    --mount=type=cache,target=/app/.cache,uid=1000 \
    CI=1 yarn install

COPY --chown=node:node redwood.toml .
COPY --chown=node:node graphql.config.js .

# api build
# ---------
FROM base as api_build

COPY --chown=node:node api api
RUN yarn rw build api

# web prerender build
# -------------------
FROM api_build as web_build_with_prerender

COPY --chown=node:node web web
RUN yarn rw build web

# web build
# ---------
FROM base as web_build

COPY --chown=node:node web web
RUN yarn rw build web --no-prerender

# api serve
# ---------
FROM node:20-bookworm-slim as api_serve

RUN corepack enable

RUN apt-get update && apt-get install -y \
    openssl \
    curl \
    && rm -rf /var/lib/apt/lists/*

USER node
WORKDIR /app

COPY --chown=node:node .yarnrc.yml .
COPY --chown=node:node package.json .
COPY --chown=node:node api/package.json api/
COPY --chown=node:node yarn.lock .

RUN mkdir -p /app/.yarn/berry/index
RUN mkdir -p /app/.cache

RUN --mount=type=cache,target=/app/.yarn/berry/cache,uid=1000 \
    --mount=type=cache,target=/app/.cache,uid=1000 \
    CI=1 yarn workspaces focus api --production

COPY --chown=node:node redwood.toml .
COPY --chown=node:node graphql.config.js .

COPY --chown=node:node --from=api_build /app/api/dist /app/api/dist
COPY --chown=node:node --from=api_build /app/api/db /app/api/db
COPY --chown=node:node --from=api_build /app/node_modules/.prisma /app/node_modules/.prisma

ENV NODE_ENV=production

CMD [ "./api/dist/server.js" ]

# web serve
# ---------
FROM node:20-bookworm-slim as web_serve

RUN corepack enable

USER node
WORKDIR /app

COPY --chown=node:node .yarnrc.yml .
COPY --chown=node:node package.json .
COPY --chown=node:node web/package.json web/
COPY --chown=node:node yarn.lock .

RUN mkdir -p /app/.yarn/berry/index
RUN mkdir -p /app/.cache

RUN --mount=type=cache,target=/app/.yarn/berry/cache,uid=1000 \
    --mount=type=cache,target=/app/.cache,uid=1000 \
    CI=1 yarn workspaces focus web --production

COPY --chown=node:node redwood.toml .
COPY --chown=node:node graphql.config.js .

COPY --chown=node:node --from=web_build /app/web/dist /app/web/dist

# Change this to suit your configuration but make sure you're using http
ENV NODE_ENV=production \
    API_PROXY_TARGET=http://api.app.name:8911

CMD "node_modules/.bin/rw-web-server" "--api-proxy-target" "$API_PROXY_TARGET"

# console
# -------
FROM base as console

USER node

COPY --chown=node:node api api
COPY --chown=node:node web web
COPY --chown=node:node scripts scripts
  4. Next, let's edit docker-compose.prod.yml; make sure you change the domain here too and add any environment variables you'll need:
version: "3.8"

services:
  api:
    build:
      context: .
      dockerfile: ./Dockerfile
      target: api_serve
    ports:
      - "8911:8911"
    environment:
      - NODE_ENV=production
      - DIRECT_URL=
      - DATABASE_URL=
      - SESSION_SECRET=
      - REDIS_URL=
    healthcheck:
      test: curl -f http://api.domain.name:8911/graphql/health || exit 1
      interval: 10s
      start_period: 10s
      timeout: 5s
      retries: 3

  web:
    build:
      context: .
      dockerfile: ./Dockerfile
      target: web_serve
    ports:
      - "8910:8910"
    depends_on:
      api:
        condition: service_healthy
    environment:
      - NODE_ENV=production
      - API_PROXY_TARGET=http://api.app.name:8911
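Before heading back to Coolify, you can optionally sanity-check the compose setup on your dev machine, assuming you have Docker installed locally; this catches YAML mistakes and Dockerfile build errors before Coolify does:

docker compose -f ./docker-compose.prod.yml config   # validate and print the resolved config
docker compose -f ./docker-compose.prod.yml build    # make sure both targets build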
  5. You're almost ready to commit and push your latest code to GitHub, but don't do it yet; we have to head back over to Coolify since we left the configuration empty.

  6. Add this Custom Start Command (the -d flag is necessary to run your services in the background and for your deployment to finish):
docker compose -f ./docker-compose.prod.yml up -d
  7. Make sure Auto Deploy under the Advanced tab is set to true if you wish to immediately deploy any commits to your branch. Coolify handles this with webhooks and the GitHub App we created.
  8. You don't need any additional settings. Simply commit your code changes and wait for Coolify to pick them up.
  9. You'll notice under the Deployments tab that Coolify has begun deploying your application. Take a look inside and you should see it prepare the container with your image and work through the Dockerfile and compose commands we set up for api and web. You'll know you're good when the last line says "New container started."
  10. Now head over to the Logs tab and you'll see the api and web containers running with their respective output. The web server should be listening on 0.0.0.0:8910 and the api server on 0.0.0.0:8911.
  11. On the Command tab you get a CLI into your Debian image and can run commands by selecting a container. Note again that the working directory is /app, where your files are located.
  12. The last thing you need to do is go back to the Configuration tab and set domains for your api and web containers. Make sure to use https:// here (see the example after this list).
  13. You might have to redeploy, but if you see a green Running status, try visiting your domains. Your app should be up and running, and you'll see incoming traffic on the Logs tab for the api container, such as auth requests. Congratulations, you've successfully deployed your Redwood app with Coolify! If you run into trouble, post in this thread and I'll do my best to assist.
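The domain setup looks roughly like this (app.example.com and api.example.com are placeholders; Coolify lets you append the container port to the domain so the proxy knows which port to route to, but double-check the current docs if the syntax has changed):

web: https://app.example.com:8910
api: https://api.example.com:8911

Once DNS points at your VPS and certificates are issued, a quick smoke test from your machine could look like this (the /graphql/health path is the same one used in the compose healthcheck above):

curl -I https://app.example.com
curl https://api.example.com/graphql/health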

What’s not working

  • I noticed a bug deploying with GraphQL fragments enabled, but my app still works with fragments; it's just magical. I opened a bug issue here.
  • Coolify runs your deployment inside a bridge network. If you need to seed your database or perform a Prisma migrate, you'll have to ensure your Postgres db or other services are also reachable on that bridge network (see the sketch after this list). See my last troubleshooting post here.
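Attaching an existing database container to the deployment's network is plain Docker; a minimal sketch, with placeholder names you'd replace after checking docker ps and docker network ls:

docker network ls                                        # find the network your deployment uses
docker network connect <network> <postgres-container>    # attach Postgres to it
docker network inspect <network>                         # confirm both containers are now listed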

Extra services

I do not recommend adding a Postgres or Redis into your project deployment (the docker-compose.prod.yml file I provided above). These should be separate resources you add through the Coolify UI. You'll want to know which service is and is not running, so it's best to split them into different resources, but still within the same project. You'll have to enable "Connect To Predefined Network" under Advanced in your Redwood deployment configuration if you want your api to be able to reach other resources such as Redis and Postgres; the connection strings then use the container names as hostnames, as sketched below.
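For example, once the api and the database resources share a network, the environment variables in the compose file above would point at the other containers by name (everything in angle brackets is a placeholder; find the real container names with docker ps):

DATABASE_URL=postgresql://<user>:<password>@<postgres-container-name>:5432/<db-name>
REDIS_URL=redis://default:<password>@<redis-container-name>:6379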

The only issue I've run into is that the Redis resource from Coolify appends the --appendonly yes flag, so you'll always have AOF. If you do not want Redis persistence, you'll have to create your own docker-compose for Redis and add it that way. You also can't change the default password with the Redis instance you add from the Coolify UI; see the sketch below.
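A minimal sketch of such a compose file, without AOF and with your own password (the image tag and service name are up to you; leave ports unpublished and rely on the shared Docker network so Redis isn't exposed to the internet):

services:
  redis:
    image: redis:7-bookworm
    # REDIS_PASSWORD is read from your environment / Coolify's env vars at compose time.
    # --appendonly no disables AOF; add --save "" as well if you want no persistence at all.
    command: redis-server --requirepass ${REDIS_PASSWORD} --appendonly no
    restart: unless-stopped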

Thank You

Finally, I want to thank everyone who worked on the Dockerfile and compose yaml files to give me a starting point, and @xmascooking for help and input in the original troubleshooting thread. Try it out and see how you like it; sooner or later you'll have to make the switch to a serverful environment. Why not save money and have full control over your business?


Jey @jamesj This is a great guide. I wanted to check in a little over a month later and ask how things are going on Coolify? Any problems? Much of a burden from a maintenance point of view?

Not at all! I've since spun up several services and connected them: HashiCorp Vault, MinIO, Redis, Postgres, etc. Everything you'd need is right there! Coolify does have a lot of issues open, but the guy maintaining it is a beast. I recommend upgrading through the curl command over SSH instead of the web UI. I know the Discord is always very busy and there are tons of feature requests; for example, I can't wait to see more detailed resource monitoring for each container in the web UI.

I do recommend getting used to Docker though, especially the CLI (docker ps is my best friend). Even though Coolify gives you a lot of premade images, it's hard to customize them. I love being able to write my own docker compose and tailor the service to my needs. If you use the provided images, a lot is abstracted and some images have weird hardcoded startup parameters. Overall, if you just SSH into your box you can do a lot with the Docker CLI.

Overall the deployment for my Redwood app (with over 2500 lines in my schema and several pages and components) takes about 240s for the initial deploy and then anywhere between 20-40s. I really like how in-depth the deployment logs are, and it's easy to fix bugs or issues. I don't miss Vercel or Netlify one bit :joy: It's easy to set up auto deploys from GitHub and integrate a CI/CD pipeline. Once you have a service running it's easy to manage, and Coolify essentially just uses Docker under the hood for everything.

Performance-wise I can't say yet, but my box runs on DigitalOcean and it's one of the lowest tiers; once I break into the market and thousands of users are hitting my website I'll probably have to upgrade, but right now the resources aren't stretched and load times are great.

  • Avoid premade images (it's a noob trap) and focus on setting up your own docker-compose files
  • Setup is actually pretty easy and the UI is good enough to get the job done (responsive, and it notifies you if something isn't meeting its healthcheck; looking forward to the UI overhaul he's planning, though)
  • Easily drain logs to something like Axiom
  • Easily schedule backups for your services (can be local on the VPS or another VPS)
  • The weakest part of Coolify is the networking: you have to manage it yourself, and this is where the Docker CLI comes in handy. I'm by no means a Traefik expert, but you can use Caddy as well for your reverse proxy
  • SSL was very easy to set up (and free)
  • I can keep services local and next to each other, so my api communicates directly with my Redis, MinIO or PostgreSQL server and traffic is never exposed to the internet (what a godsend, honestly, especially if you're dealing with sensitive data and trying to stay compliant with GDPR etc.; it just makes life easy)
  • I even have RW Studio and Prisma Studio in a container exposed to the internet, but behind a firewall rule that only permits my IP address, so I can see what's going on without much security worry (see the sketch below). Restarting or stopping containers is very easy. (To test things in dev I have a local Redis, Postgres and MinIO server)
  • Coolify has Docker Swarm support but it's experimental. If you didn't need the UI and you're well versed in Docker, you could just use Dokku on a VPS, but Coolify does have a lot of goodies out of the box (SSL being a big one, it just works)
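That IP-restricted access is again plain ufw (or your provider's firewall); a sketch with placeholder values, where 203.0.113.10 stands in for your own IP and 5555 for whatever host port the studio container is published on:

sudo ufw allow from 203.0.113.10 to any port 5555 proto tcp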

No problems, honestly; it's been pretty smooth after the initial setup and learning how to use it. Let me know if you have any specific concerns or questions and I'll try to address them!

Thanks a lot for sharing your self-hosted/serverful journey. We are contemplating making a full switch away from Azure ourselves and Coolify is something we have looked into.

Have you considered other VPS providers than DigitalOcean? We use Hetzner for some of our containers and found their price for performance is really good.

If you don’t mind me asking, what prompted the wish to switch from Azure?

For my production setup I have my own company VPS where I run a "master" Coolify and connect each customer's VPS to it as a server. This way I can track every customer's Coolify server and services; it's an easy way to figure out if something has gone wrong. I'd recommend that setup since you can push updates from your CI/CD pipeline from your master to your customers. I also keep a test VPS that's 1:1 with production before I deploy to every customer (staged rollouts, not like CrowdStrike :joy: ). My pipeline is not perfect yet, still a lot of work to do there, but I have the vision. I just prefer being in complete control and Coolify allows me to do that. I looked into Kubernetes but setting that up is beyond me; maybe when I grow too large I'll make that switch and hire several engineers to help me with it. The beauty of being able to control your environments with Docker and have predictable environments is good enough for me right now, and again, Docker Swarm is experimentally supported.

I actually considered Hetzner as well, but I fell in love with the DigitalOcean UI and I have had zero downtime for two months now, whereas I've seen Coolify itself have outages (they're hosting on Hetzner). I haven't had issues with performance, but this is definitely worth investigating, as is how much you can trust the company to stay afloat. In all honesty, one of my project items is to make sure I have backups and replication of my data spread across different providers, so if DigitalOcean decides to nuke my account I can easily switch to Hetzner and vice versa. I also explored AWS, but to me the UI is not user friendly at all; there are so many hidden settings and options that I never truly know what's going on.

I would not recommend putting all your trust in any one hosting company. Preferably you grow so big you can spin up your own infrastructure and never have to worry about stuff like this.

Azure is a luxury in terms of how easy it is to get started and how manageable it is to scale as you grow. I don’t think we benefit enough from their nice UI and managed everything. So one of the major reasons for migrating is the cost of compute/storage/etc. With something like Hetzner or similar you get so much more bang for your buck. See this price/performance sheet from the guys over at https://reclaim-the-stack.com/

Maturity-wise there are some downsides: a blown fuse can happen and may wipe out an entire rack, but I've read that people try to mitigate such issues by buying servers one by one rather than in bulk, so they are spread across racks.

Another benefit is being in full control of everything, but the downside is having to handle all the security yourself, which should be manageable but seems scary at first glance.