Dockerize RedwoodJS

Ahhh, my apologies, it’s a play on the term code golf.

Code golf is a type of recreational computer programming competition in which participants strive to achieve the shortest possible source code that implements a certain algorithm.

Meaning we are trying to get the smallest possible Docker image that is able to run the application.

All that to say I’m always happy to experiment with options that aren’t AWS and which flip the bird at lock-in.

Same, always interested in exploring alternative deployment methods. "Universal deployment machine" would imply you can deploy to something that isn't AWS.

All these changes look great! Especially the change from CMD to ENTRYPOINT.

What is your stance on the front-end webserver? Considering you changed http-server to local-web-server to allow more control, would it make more sense to opt for Nginx or Caddy, which are widely adopted and highly customizable?

I have only used http-server for local development, so I don't know how much e.g. local-web-server vs. Nginx or Caddy weighs, or how much build time each adds.

Personally, I'm considering using Caddy where I currently use nginx, as described here.

@ajcwebdev @realStandal @thedavid
Should we coordinate the state of our Dockerfiles in a mutual repo and maybe create a CI pipeline to test and benchmark them? Ultimately we'd find a baseline that would hopefully end up in the official Redwood repo (after some discussions on how we could approach local Docker development, etc.).

Ahh, it makes so much sense now that you point it out lmao

I really think the API side can still be cut back. I’m going to say the web-side is a lost cause again, cause that seemed to work out well before.

None really, I chose the two I did out of simplicity - I knew I wasn't pushing the app with the intent of keeping it in production, and I haven't spent enough time fiddling with web servers to have justified spending a day wrestling with one.

Just from quickly looking over Caddy, it looks really nice - I'm tempted to start hacking away on it now. My main concern would be losing flexibility (which it doesn't look like we would). One of the reasons I made the ENTRYPOINT change was to keep deployment flexible. When I deployed to Code Engine, I had to add a rewrite rule that directed certain requests at the web side over to the API. However, I didn't want to bake that URL into the image itself, so that one RW project could be deployed to many different providers/regions - without multiple images for each. The latest setup I shared lets that be the case, where I can add -r '/api/(.*) -> https://api.BLAH.REGION.codeengine.appdomain.cloud/$1' as an entry to CMD.
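A minimal sketch of that ENTRYPOINT/CMD split, assuming local-web-server's `ws` CLI (the base image, paths, and flags here are illustrative, not the actual Dockerfile):

```dockerfile
# Sketch only: ENTRYPOINT pins the server invocation,
# CMD holds per-deployment extras that `docker run` can override.
FROM node:14-alpine
WORKDIR /app
COPY web/dist ./dist
RUN yarn global add local-web-server

# Fixed parts of the command live in ENTRYPOINT...
ENTRYPOINT ["ws", "--directory", "dist", "--port", "8910", "--spa", "index.html"]

# ...while CMD stays empty by default, so a rewrite rule like
#   docker run image -r '/api/(.*) -> https://api.example.com/$1'
# is appended per deployment without rebuilding the image.
CMD []
```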

I do lean more towards using something like Caddy - over local-web-server or even RW’s rw serve web - but I definitely see the desire to keep RW’s tooling in place. Best case, swapping out may even mean we can completely remove Node from the image’s final stage, right? Sounds like an easy place to save a bit of space.

I’m all for it :+1:

Yes, and I’m pretty sure nginx:alpine would be sufficient. I currently run nginx as the final stage.

I don’t know IBM’s Code Engine, but could you add these rewrite rules on an edge proxy somewhere? I.e. load balance /api requests directly to the api container(s) without the need of hitting the web container(s).
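As a rough illustration of that idea, an nginx fragment (hostnames and ports are hypothetical) could route /api at the edge so those requests never touch the web container:

```nginx
# Hypothetical edge-proxy fragment: /api goes straight to the
# api containers, everything else to the web container.
upstream api {
    server api-1:8911;
    server api-2:8911;  # load-balance across api replicas
}

server {
    listen 80;

    location /api/ {
        proxy_pass http://api/;
    }

    location / {
        proxy_pass http://web:8910;
    }
}
```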

Let me know if you come up with something cool. Haven’t had the time to play around with Caddy properly, but it definitely looks cool. :slight_smile:

Love it, the more data the better.

Just skimming through the comments right now. If this was my app, I’d consider a service that builds web/dist and then throws away everything else web related. Would then handle assets via Nginx config. Could be included with API as individual service or as a distinct service.

But I’d make that decision based on how I’m handling my static assets — e.g. use Cloudflare as my CDN and adjust deployment process accordingly (maybe I wouldn’t even need Nginx). You could even deploy the assets to Netlify CDN via their CLI (yes, that’s a thing!).

My point: there will never be one Docker setup to rule them all as requirements will vary based on infra, hosting, and networking. So I’d defer to whatever feels like the most foundational parts people could then modify per their requirements and setup.


I converted my web side’s Dockerfile to extend nginx:1.21.3-alpine, dropping the final image from 136MB down to 27MB (from 630 to 27, what a journey).
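The general shape of that kind of build is a multi-stage Dockerfile; this is a sketch under assumed names and versions, not the exact file:

```dockerfile
# Build stage: Node is only needed to produce web/dist.
FROM node:14-alpine AS build
WORKDIR /app
COPY . .
RUN yarn install && yarn rw build web

# Final stage: only nginx and the static assets survive, so
# Node and node_modules are dropped from the shipped image.
FROM nginx:1.21.3-alpine
COPY --from=build /app/web/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
```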

Shoutout to the configuration on @jeliasson’s Kubernetes post - I did have to add include mime.types; under the http block to get my stylesheets to load.
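For context, a minimal sketch of where that include lands (paths are illustrative): without it, nginx serves CSS as its default type and browsers ignore the stylesheets.

```nginx
events {}

http {
    # Without this include, stylesheets come back as
    # application/octet-stream and won't load.
    include mime.types;
    default_type application/octet-stream;

    server {
        listen 80;
        root /usr/share/nginx/html;

        # SPA fallback so client-side routes resolve to index.html
        location / {
            try_files $uri $uri/ /index.html;
        }
    }
}
```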

Did the same using caddy:2.4.5-alpine; it’s at 44MB. Caddy does require persistent volumes, but if you were setting up TLS certificates for your nginx-based build you’d need them anyway, right?
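A hypothetical Caddyfile equivalent might look like this (the domain and API target are made up); the persistent volume is where Caddy keeps the certificates it provisions automatically:

```Caddyfile
# Illustrative sketch: Caddy auto-provisions TLS for the site
# address, storing certs in /data (hence the persistent volume).
example.com {
    root * /srv
    encode gzip

    # Forward API traffic, serve everything else as static files
    # with an SPA fallback to index.html.
    reverse_proxy /api/* https://api.example.com
    try_files {path} /index.html
    file_server
}
```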

Just from setting the two up, I think it’d be easy enough to have a cookbook about swapping one out for the other (or X out for Y, for that matter), if that was someone’s desire - it took me about 10 minutes to swap Nginx over to Caddy, most of which was spent bouncing between and reading pages of their docs.

Having no experience with either, I found Caddy quite a lot easier to get started with - their documentation and forum felt a lot more approachable. Had I not had @jeliasson’s post, I would have spent a lot more time on Nginx’s configuration.

And then here are the final files:


I’m assuming this is what you meant?

Very reasonable imo


@realStandal I’m glad the Kubernetes post helped out. It was fun writing it :slight_smile:

:exploding_head: Whoa. Well done!

Pretty much. Sure, you could build the image along with your TLS certificates and thus not need persistent storage. This is where an HTTP proxy in front of api and web would be beneficial. That’s definitely the case if you’re using Let’s Encrypt and running more than one replica.

This is great! I’ll finish up our testing repo with a copy-pasta CI-pipeline this weekend and get back to you.


Looking forward to seeing it all put together.

Thank you for taking the time to educate. So, is the lack of security between the proxy and the container negligible because the container is only being accessed through the proxy, or does that connection inherit security from the proxy? Or am I off the ball completely :joy:


You’re right on the ball. Depending on your setup, I would say it’s negligible. You could have a proxy in front of everything that does the heavy TLS termination and then forward that traffic, unencrypted, to the respective containers ([client] <HTTPS> [proxy] <HTTP> [containers]).
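That [client] <HTTPS> [proxy] <HTTP> [containers] layout could be sketched in nginx roughly like this (certificate paths and hostnames are placeholders):

```nginx
# Sketch of TLS termination at the proxy: HTTPS stops here,
# and the hop to the containers is plain HTTP on the internal network.
server {
    listen 443 ssl;
    ssl_certificate     /etc/ssl/certs/example.crt;
    ssl_certificate_key /etc/ssl/private/example.key;

    location /api/ {
        proxy_pass http://api:8911/;  # unencrypted internal hop
    }

    location / {
        proxy_pass http://web:8910;   # unencrypted internal hop
    }
}
```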

DM me on Discord if you’d like to discuss it further. :v:


What if we publish a basic Nginx container for Redwood that handles /web/dist?

Easy to create a repo and a GH action for publishing.
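Something like the following GitHub Actions workflow could handle the publishing; the repo, secrets, and tag scheme here are all made up for illustration:

```yaml
# Illustrative workflow only; image name, tags, and secrets
# are assumptions, not an existing Redwood pipeline.
name: publish-web-image
on:
  push:
    tags: ["v*"]

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v2
        with:
          context: .
          file: ./web/Dockerfile
          push: true
          tags: redwoodjs/web:${{ github.ref_name }}-nginx
```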

fwiw


Just to clarify, do you mean maintaining an official one for the web, e.g. building and publishing redwoodjs/web:0.37.0-nginx? If so, we’re defo on the same thought train :slight_smile:


Yes, with boilerplate settings baked in!


@ajcwebdev @realStandal @thedavid @doesnotexist
I sent you a collaboration invitation to a redwoodjs-docker repo.


I’ve started looking at reviving the GCP deploy with a Dockerfile (feat(gcp): adding support for GCP project generation by bcoe · Pull Request #1225 · redwoodjs/redwood · GitHub) but haven’t finished getting a working deploy yet. I’ll let you know how it goes. I also wanted to try out the GCP buildpack to compare, but it’s possible that could just use the common Dockerfile as well.


Please do! Are you uploading it to their container service? I glanced at the PR and it looked more like something with Firebase.

How difficult would it be to include start-up time in the size/build-time comparisons?

It’s using Firebase for the web hosting, but the API is on Cloud Run, which I think is what you’re referring to by Google’s container service.


I figured out a Docker Compose setup, will do a full write up soon but in the meantime check out the repo here.

version: "3.9"
services:
  web:
    build:
      context: .
      dockerfile: ./web/Dockerfile
    ports:
      - "8910:8910"
  api:
    build:
      context: .
      dockerfile: ./api/Dockerfile
    ports:
      - "8911:8911"

Edit: Write up can now be found here.


Beautiful stuff @ajcwebdev!

I’m a bit behind my contributions, but I’ll catch up soon. Feel free to send a PR to jeliasson/redwoodjs-docker when the write up is done, or ping me and I’ll do it :slight_smile: