Dockerize RedwoodJS

Awesome, thanks for the notes, @realStandal! Quick update on our side: we’re currently in talks with the Qovery folks and will hopefully have some docs for deploying there in the near future.

The plot thickens!

I’ve replaced the @redwoodjs/cli dependency on the API side with @redwoodjs/api-server. It seems @redwoodjs/internal is one of its dependencies but isn’t listed in its package.json.

```diff
# Dockerfile.api
- RUN yarn install && yarn add react react-dom @redwoodjs/cli
+ RUN yarn install && yarn add react react-dom @redwoodjs/api-server @redwoodjs/internal

COPY graphql.config.js .
...

- CMD [ "yarn", "rw", "serve", "api", "--port", "8911" ]
+ CMD [ "yarn", "rw-api-server", "--port", "8911" ]
```
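For context, here’s a minimal sketch of what the API side’s serve stage might look like after this change — the base image, stage names, and paths are assumptions, not the exact Dockerfile:

```Dockerfile
# Sketch of the API serve stage; base image, stage names,
# and paths are assumptions
FROM node:14-alpine as serve

WORKDIR /app

# @redwoodjs/api-server provides the rw-api-server binary;
# @redwoodjs/internal is added explicitly since it's an
# undeclared dependency of @redwoodjs/api-server
RUN yarn add react react-dom @redwoodjs/api-server @redwoodjs/internal

# Copy the built functions from an earlier build stage
COPY --from=build /app/api/dist /app/api/dist

EXPOSE 8911

CMD [ "yarn", "rw-api-server", "--port", "8911" ]
```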

For the web, I replaced the dependency on @redwoodjs/cli with http-server, using a catch-all redirect.

```diff
# Dockerfile.web
- RUN yarn add react react-dom @redwoodjs/cli
+ RUN yarn add react react-dom http-server

EXPOSE 8080

- CMD [ "yarn", "rw", "serve", "web" ]
+ CMD [ "yarn", "http-server", "web/dist", "--proxy", "http://localhost:8080?" ]
```
  • The final API image decreased from 748 MB to 530 MB
  • The final web image decreased from 630 MB to 136 MB

Pleasantly wrong on the web side.
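For anyone curious how the catch-all redirect works, here’s a rough sketch of the web serve stage — the stage layout and paths are assumptions:

```Dockerfile
# Sketch of the web serve stage with http-server;
# stage layout and paths are assumptions
FROM node:14-alpine as serve

WORKDIR /app

RUN yarn add react react-dom http-server

COPY --from=build /app/web/dist /app/web/dist

EXPOSE 8080

# The trailing "?" folds any unresolved path into a query string on the
# root URL, so client-side routes fall back to index.html (http-server's
# documented SPA catch-all)
CMD [ "yarn", "http-server", "web/dist", "--proxy", "http://localhost:8080?" ]
```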

I’ve also been working on getting it running on IBM’s Code Engine.


This is great, we’re working on some Docker golf of our own so I’ve forwarded this over to the Fly team.

Curious what appeals to you about IBM’s Code Engine? Just wondering, since I’ve never used it myself.

Nice! (What do you mean by Docker golf? lol - just that y’all are going back and forth?)

I made a few more updates to get it into working order (to deploy to Code Engine), mostly around starting a container:

  • I added USER 1100:1100 to both sides’ images - done during the serve stage so the commands which run these sides aren’t run as root (reference).
  • I swapped out http-server for local-web-server on the web. It provides a bit more control and configuration (used below).
  • I moved all CMD declarations on both images to ENTRYPOINT, leaving CMD open for overriding.
  • Updated ports to follow Redwood’s defaults - the API will use 8911, the web 8910.
  • Removed yarn from each side’s ENTRYPOINT command, installing both servers as global dependencies. This follows another Code Engine recommendation regarding performance.

Here is the gist of each side’s Dockerfile, which I’ve used to deploy to Code Engine. (Again, these changes are focused on the “serve” stages.)
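As a rough illustration of those points together, here’s a minimal sketch of what the web side’s serve stage could look like — the base image, stage names, and paths are assumptions standing in for the gist:

```Dockerfile
# Sketch of the web serve stage after the updates above;
# base image, stage names, and paths are assumptions
FROM node:14-alpine as serve

WORKDIR /app

# Install the server globally so the ENTRYPOINT doesn't go through yarn
RUN yarn global add local-web-server

COPY --from=build /app/web/dist /app/web/dist

# Run as a non-root user during the serve stage
USER 1100:1100

# Redwood's default web port
EXPOSE 8910

# Fixed command lives in ENTRYPOINT; CMD stays open for per-deploy flags
ENTRYPOINT [ "ws", "--directory", "web/dist", "--port", "8910" ]
CMD []
```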

I do have rw deploy and rw setup commands created which will integrate with IBM’s CLI to help ease and configure deploying - I’ve even been able to deploy to Code Engine using them. I’ll probably sit on them if/until Redwood’s Docker story fleshes out a bit more - so I can point to a rw setup docker command to be run alongside Code Engine’s.

Curiosity and personal preference - I’ve admired IBM since I was real young. But this was mainly just for science. Of the four providers I looked at (IBM, GCP, Azure, AWS), IBM had the lowest costs for their serverless containers, and their functions do not play well with how Redwood’s are currently set up.

To not turn this into my own personal “bash AWS” session, I’ll just leave it at: I’m not a huge fan of anything Amazon-related - emotionally driven opinions :grinning_face_with_smiling_eyes: But from a practical standpoint, they’re probably the best choice - particularly for startups, their freebies look like they could go a long way. Not out of kindness, though, I’m sure. All that to say, I’m always happy to experiment with options that aren’t AWS and which flip the bird at lock-in.


Ahhh, my apologies, it’s a play on the term code golf.

> Code golf is a type of recreational computer programming competition in which participants strive to achieve the shortest possible source code that implements a certain algorithm.

Meaning we are trying to get the smallest possible Docker image that is able to run the application.

> All that to say, I’m always happy to experiment with options that aren’t AWS and which flip the bird at lock-in.

Same, always interested in exploring alternative deployment methods. “Universal deployment machine” would imply you can deploy to something that isn’t AWS.

All these changes look great! Especially the change from CMD to ENTRYPOINT.

What is your stance on the front-end web server? Considering you changed http-server to local-web-server to allow more control, would it make more sense to opt for Nginx or Caddy, which are widely adopted and highly customizable?

I have only used http-server for local development and don’t know how much, e.g., local-web-server vs. Nginx or Caddy weighs, or how much build time each adds.

Personally, I’m considering using Caddy where I currently use Nginx, as described here.

@ajcwebdev @realStandal @thedavid
Should we coordinate the state of our Dockerfiles in a mutual repo and maybe create a CI pipeline to test and benchmark them? Ultimately we’d find a baseline that would hopefully end up in the official Redwood repo (after some discussions on how we could approach local Docker development, etc.).

Ahh, it makes so much sense now that you point it out lmao

I really think the API side can still be cut back. I’m going to say the web side is a lost cause again, since that seemed to work out well before.

None really; I chose the two I did out of simplicity - I knew I wasn’t pushing the app with the intent of keeping it in production, and I haven’t spent enough time fiddling with web servers to justify spending a day wrestling with one.

Just from quickly looking over Caddy, it looks really nice - I’m tempted to start hacking away on it now. My main concern would be losing flexibility (which it doesn’t look like we would). One of the reasons I made the ENTRYPOINT change was to keep that ambiguity during deployment. When I deployed to Code Engine, I had to add a rewrite rule that directed certain requests at the web side to the API. However, I didn’t want to bake that URL into the image itself, so that one RW project could be deployed to many different providers/regions - without multiple images for each. The latest setup I shared lets that be the case, where I can add -r '/api/(.*) -> https://api.BLAH.REGION.codeengine.appdomain.cloud/$1' as an entry to CMD.
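To make that concrete, here’s a minimal sketch of the pattern, assuming local-web-server’s ws binary and the ENTRYPOINT/CMD split described above; the hostname is a placeholder:

```Dockerfile
# The fixed serve command lives in ENTRYPOINT
ENTRYPOINT [ "ws", "--directory", "web/dist", "--port", "8910" ]

# CMD is left empty so each deployment can append its own flags,
# e.g. a provider-specific rewrite rule at `docker run` time:
#   docker run <image> -r '/api/(.*) -> https://api.example.com/$1'
CMD []
```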

I do lean more towards using something like Caddy - over local-web-server or even RW’s rw serve web - but I definitely see the desire to keep RW’s tooling in place. Best case, swapping out may even mean we can completely remove Node from the image’s final stage, right? Sounds like an easy place to save a bit of space.

I’m all for it :+1:

Yes, and I’m pretty sure nginx:alpine would be sufficient. I currently run nginx as the final stage.

I don’t know IBM’s Code Engine, but could you add these rewrite rules on an edge proxy somewhere? I.e., load balance /api requests directly to the API container(s) without needing to hit the web container(s).

Let me know if you come up with something cool. Haven’t had the time to play around with Caddy properly, but it definitely looks cool. :slight_smile:

Love it, the more data the better.

Just skimming through the comments right now. If this was my app, I’d consider a service that builds web/dist and then throws away everything else web related. Would then handle assets via Nginx config. Could be included with API as individual service or as a distinct service.

But I’d make that decision based on how I’m handling my static assets — e.g. use Cloudflare as my CDN and adjust deployment process accordingly (maybe I wouldn’t even need Nginx). You could even deploy the assets to Netlify CDN via their CLI (yes, that’s a thing!).

My point: there will never be one Docker setup to rule them all as requirements will vary based on infra, hosting, and networking. So I’d defer to whatever feels like the most foundational parts people could then modify per their requirements and setup.


I converted my web side’s Dockerfile to extend nginx:1.21.3-alpine, dropping the final image from 136 MB down to 27 MB (from 630 to 27, what a journey).

Shoutout to the configuration on @jeliasson’s Kubernetes post - I did have to add include mime.types; under the http block to get my stylesheets to load.
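For anyone following along, a minimal sketch of that nginx-based serve stage — the config filename, stage names, and paths here are assumptions:

```Dockerfile
# Sketch of an nginx-based web serve stage;
# config filename, stage names, and paths are assumptions
FROM nginx:1.21.3-alpine as serve

# nginx.conf should carry `include mime.types;` inside the http block,
# otherwise stylesheets get served with the wrong Content-Type
COPY nginx.conf /etc/nginx/nginx.conf

COPY --from=build /app/web/dist /usr/share/nginx/html

EXPOSE 8910
```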

Did the same using caddy:2.4.5-alpine; it’s at 44 MB. Caddy does require persistent volumes, but if you were setting up TLS certificates for your nginx-based build, you’d need them too anyway, right?
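And the Caddy equivalent, sketched under the same assumptions about stage names and paths:

```Dockerfile
# Sketch of a Caddy-based web serve stage;
# stage names and paths are assumptions
FROM caddy:2.4.5-alpine as serve

# The Caddyfile would define the site: a file_server for the static
# assets, plus any rewrite/reverse_proxy rules for /api requests
COPY Caddyfile /etc/caddy/Caddyfile

COPY --from=build /app/web/dist /srv

EXPOSE 8910

# Caddy keeps runtime state (e.g. TLS certificates) under /data and
# /config - these are where the persistent volumes mentioned above mount
```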

Just from setting the two up, I think it’d be easy enough to have a cookbook about swapping one out for the other (or X out for Y, for that matter), if that were someone’s desire - it took me about 10 minutes to swap Nginx over to Caddy, most of which was spent bouncing between and reading pages of their docs.

Having no experience with either, I found Caddy quite a lot easier to get started with - their documentation and forum felt a lot more approachable. Had I not had @jeliasson’s post, I would have spent a lot more time on Nginx’s configuration.

And then here are the final files:


I’m assuming this is what you meant?

Very reasonable imo


@realStandal I’m glad the Kubernetes post helped out. It was fun writing it :slight_smile:

:exploding_head: Whoa. Well done!

Pretty much. Sure, you could build the image along with your TLS certificates and thus not need persistent storage. This is where an HTTP proxy in front of api and web would be beneficial - definitely the case if you’re using Let’s Encrypt and running more than one replica.

This is great! I’ll finish up our testing repo with a copy-pasta CI pipeline this weekend and get back to you.


Looking forward to seeing it all put together.

Thank you for taking the time to educate. So, is the lack of security between the proxy and the container negligible because the container is only being accessed through the proxy, or does that connection inherit security from the proxy? Or am I off the ball completely :joy:


You’re right on the ball. Depending on your setup, I would say it’s negligible. You could have a proxy in front of everything that does the heavy TLS termination and then forward that traffic, unencrypted, to the respective containers ([client] <HTTPS> [proxy] <HTTP> [containers]).

DM me on Discord if you’d like to discuss it further. :v:


What if we publish a basic Nginx container for Redwood that handles /web/dist?

Easy to create a repo and a GH action for publishing.

fwiw


Just to clarify, do you mean maintaining an official one for the web, e.g. building and publishing redwoodjs/web:0.37.0-nginx? If so, we’re defo on the same thought train :slight_smile:


Yes, with boilerplate settings baked in!
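A minimal sketch of how a project might consume such a base image — the tag follows the redwoodjs/web:0.37.0-nginx example above, and the image name and paths are assumptions:

```Dockerfile
# Hypothetical published base image with Redwood's boilerplate
# nginx settings baked in (tag follows the example above)
FROM redwoodjs/web:0.37.0-nginx as serve

# A project would only need to layer its built assets on top
COPY --from=build /app/web/dist /usr/share/nginx/html
```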


@ajcwebdev @realStandal @thedavid @doesnotexist
I sent you a collaboration invitation to a redwoodjs-docker repo.


I’ve started looking at reviving the GCP deploy with a Dockerfile (feat(gcp): adding support for GCP project generation by bcoe, redwoodjs/redwood PR #1225) but haven’t finished getting a working deploy yet. I’ll let you know how it goes. I also wanted to try out the GCP buildpack to compare, but there’s a possibility that could just use the common Dockerfile as well.
