Running in Docker

I tried to create a Docker setup to run a Redwood app in a container, without success. The ports are not bound and not exposed to the outside.
Steps:

  1. Cloned the create-redwood-app repository
  2. Added a Dockerfile and a docker-compose.yml file
  3. Ran docker-compose up --build -d

Any suggestions would be appreciated.

Dockerfile:
FROM node:latest
WORKDIR /usr/src/app
COPY ./package.json .
RUN npm install -g node-gyp
RUN yarn install
COPY . .
EXPOSE 8910
EXPOSE 8911
RUN yarn redwood dev

docker-compose.yml:
version: "3"
services:
  app:
    build: .
    ports:
      - "8910:8910"
      - "8911:8911"
    command: yarn redwood dev
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
      - /usr/src/app/api/node_modules
      - /usr/src/app/web/node_modules

Hi @wiezmankimchi I’d be excited to see a working Docker configuration! But to the best of my knowledge, you’d be the first one here to get it done. Let’s see if I can help you move forward.

First off, Redwood (RW) uses Yarn Workspaces, which changes several things, including how the root package and config are applied to the Web and API directories. Packages and config from root are hoisted to both API and Web, so you’ll need to account for that. Secondly, all our CLI commands take advantage of Workspaces and/or target both Web and API at the same time. E.g. yarn rw build is actually running build for both the Web and the API at the same time. You can add an option to just target API, for example, with yarn rw build api, but even in that case, there will still be packages and config from the root that yarn uses to build API. You can see the source code for the CLI commands here. You might need to write your own scripts.
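
For reference, the root package.json in a freshly generated Redwood project looks roughly like this (trimmed down, versions illustrative) – the workspaces entry is what makes Yarn hoist and share packages across api and web:

Root package.json (trimmed):

{
  "private": true,
  "workspaces": {
    "packages": ["api", "web"]
  },
  "devDependencies": {
    "@redwoodjs/core": "^0.12.0"
  }
}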

Aside: we’ve been having various issues with Node versions the past week. I see you’re using the node:latest image, which will pull v14.x. I highly recommend you stick with the lts tag (currently v12.x), and you could likely get away with lts-alpine.

Aside Part 2: does the official Node Docker image come with Yarn installed? I have no idea, but you should confirm and install it if not. Also, I noticed you’re using npm in the Dockerfile – you’ll have to use Yarn.

Depending on how you handle API and Web (either as a single service or as individual services), you’ll need to mount your volumes accordingly. You’ll also need to determine what the equivalent of yarn rw dev will be.

Lastly, I don’t understand why you have node_modules in your Compose file. Personally I .dockerignore that directory and let yarn install handle this inside the container.
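
For example, a minimal .dockerignore along these lines (adjust to taste) keeps host-installed node_modules out of the build context:

.dockerignore:

node_modules
**/node_modules
.git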

I hope that helps with the next steps. Keep me posted.

I set up a Docker Compose dev environment yesterday (using Docker Desktop) to try out the RedwoodJS tutorial, and it worked for me. I think my setup/goal was a little different from @wiezmankimchi’s, though. Here are the steps I took:

  1. mkdir redwood_apps && cd redwood_apps
  2. Create docker-compose.yml:

version: "2"
services:
  node:
    image: "node:12"
    user: "node"
    working_dir: /home/node/apps
    environment:
      - HOST=0.0.0.0
    volumes:
      - ./:/home/node/apps
    ports:
      - "8910:8910"
      - "8911:8911"
    tty: true

  3. docker-compose up -d
  4. docker exec -it redwood_apps_node_1 bash
  5. In container: yarn create redwood-app ./redwoodblog
  6. In container: cd redwoodblog
  7. Add host = "0.0.0.0" entries to the redwoodblog/redwood.toml file in the [web] and [api] sections (as per https://redwoodjs.com/reference/app-configuration – see the snippet after this list)
  8. In container: yarn redwood dev
  9. Open a browser window on the host machine to http://localhost:8910/
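
For reference, the [web] and [api] sections of redwood.toml end up looking roughly like this (the other defaults may vary with your Redwood version; the host lines are the additions from step 7):

redwood.toml (excerpt):

[web]
  host = "0.0.0.0"
  port = 8910
  apiProxyPath = "/.netlify/functions"
[api]
  host = "0.0.0.0"
  port = 8911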

Thanks for posting this @pickettd It’s a helpful proof of concept! I’m curious, why were you interested in using Docker in the first place?

Fwiw, if you’re interested in improving this setup, you could run yarn create on your host system, use volume mounts in your Docker Compose file, and then include steps like number 8 in a Dockerfile. Otherwise, I’m wondering how you were accessing/editing the app files, unless it was all via bash… ?
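
As a rough, untested sketch of what I mean – assuming the app was generated on the host (step 5) and the image only needs to install dependencies and run the dev server:

Dockerfile (sketch):

FROM node:12
WORKDIR /home/node/app
# app generated on the host with: yarn create redwood-app
COPY . .
RUN yarn install
EXPOSE 8910 8911
# relies on host = "0.0.0.0" being set in redwood.toml (step 7)
CMD ["yarn", "redwood", "dev"]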

I’ll just throw my 2 cents in as a Docker fan:

Docker is pretty dang portable. With it, you could deploy to Heroku (both app and db), amongst other places like my personal favorite: Google Cloud Run. With Cloud Run, you get scale-to-zero “serverless” containers – not functions, but container images. You end up with a URL, the app is expected to bind to the PORT variable, and Cloud Run handles the scaling. The nice thing about this is that connection pooling to Postgres becomes less of an issue.


I started playing with this over the weekend, and, as I suspected, there are a few non-trivial aspects to the setup:

  • implementing separate API and Web services using the current Yarn Workspace structure: you have two package.json files along with config in root that applies to both API and Web services
  • targeting the yarn rw dev command separately for DB, API, and Web (and generating the Prisma Client appropriately)

It’s possible with the right .dockerignore files and volume mounting configuration, but I’m definitely not there yet.
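
To make that concrete, the rough shape I’m aiming for is something like the Compose file below – service names, the Postgres image, and the commands are placeholders, and the volume mounts and .dockerignore files are exactly the part I haven’t worked out yet:

docker-compose.yml (sketch):

version: '3'
services:
  db:
    image: postgres:11
    environment:
      POSTGRES_PASSWORD: redwood
    ports:
      - '5432:5432'
  api:
    build: .
    command: yarn rw dev api
    ports:
      - '8911:8911'
    depends_on:
      - db
  web:
    build: .
    command: yarn rw dev web
    ports:
      - '8910:8910'
    depends_on:
      - api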

I also have a Docker use case. For my needs I want to package web and api in separate containers. It was easy enough to get the compiled web app running in a container served by nginx; the api, however, took some hackery.

api dockerfile:

FROM node:lts-alpine

ENV PORT 8911

RUN apk --no-cache add git

COPY . /app/api
COPY docker/ /app/

WORKDIR /app

RUN yarn install
RUN cd api && yarn install && yarn add @redwoodjs/core
RUN yarn rw build api

EXPOSE $PORT

CMD yarn rw dev api

So what I’m doing here is copying everything from /api into /app/api, and then I created a /docker dir which contains the following files that are copied into /app:

  • .babelrc.js
  • .env.defaults
  • babel.config.js
  • package.json
  • redwood.toml

I did this for two reasons. The first is that my Dockerfile for the api lives in the /api dir, which means the build context can’t reference files outside of that directory. The second is that I wanted to modify some of the files without touching the originals. So it’s a little messy.
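
(One untried alternative for the first issue would be to point the Compose build context at the project root and reference the Dockerfile by path, so it can COPY root files directly, e.g.:)

services:
  api:
    image: app-api
    build:
      context: .
      dockerfile: api/Dockerfile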

docker/package.json:

{
  "name": "app",
  "version": "0.0.0",
  "private": true,
  "dependencies": {
    "@redwoodjs/api": "^0.12.0",
    "@redwoodjs/core": "^0.12.0",
    "netlify-plugin-prisma-provider": "^0.3.0"
  }
}

docker/.babelrc.js

module.exports = { extends: 'babel.config.js' }

I also had to add @redwoodjs/core to /api/package.json otherwise the commands aren’t found.
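
With that change, api/package.json ends up looking something like this (versions illustrative):

{
  "name": "api",
  "version": "0.0.0",
  "private": true,
  "dependencies": {
    "@redwoodjs/api": "^0.12.0",
    "@redwoodjs/core": "^0.12.0"
  }
}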

My docker-compose file:

version: '3.6'

services:
  api:
    image: app-api
    build:
      context: ./api
      dockerfile: Dockerfile
    ports:
      - 8911:8911
  web:
    image: app-web
    build:
      context: ./web
      dockerfile: Dockerfile
    ports:
      - 8910:8910

Another issue is that requests from the web container are currently routed to itself, e.g. http://localhost:8910/.netlify/functions/graphql

It looks like this could be fixed by changing apiProxyPath in redwood.toml, but for now I’m just proxying the request in my nginx config.
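
The nginx rule for that is roughly the following (exact config aside; "api" is the service name from the Compose file above, and the trailing slash on proxy_pass strips the /.netlify/functions prefix before forwarding):

location /.netlify/functions/ {
  proxy_pass http://api:8911/;
}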

Otherwise this seems to work pretty well. :slight_smile:

Hi @nerdstep! Thanks for adding your setup to this conversation. And well done :rocket: I wish there was an “official” method for handling services in the case of Yarn Workspaces. Seems like we can’t be the first to have stumbled into this situation.

The second reason is that I wanted to modify some of the files without touching the originals.

^^ Curious about this as it’s an interesting workaround:

  • are you handling the Web service in the same way?
  • do you require different file modifications for API vs. Web?
  • did you ever try copying each file from root into the respective service directories?

A few other questions/thoughts if you have the time:

re: CMD yarn rw dev api → wondering if you’ve thought about how to serve the API in production (assuming you’re deploying as containers as well)?

re: requests and apiProxyPath → did you see that both host and port can be set for Web and API? It doesn’t seem like you need to, given the Nginx config solution, but just in case it comes up in the future: App Configuration | RedwoodJS Docs

And the apiProxyPath setting only comes into play in production – in your case it might not matter if you’re handling the network connections with Docker/Kubernetes.