We’re working on adding support for containers out of the box to RedwoodJS. Currently I think we’d be looking to add this via a setup command similar to rw setup [container]. Our main goal is to support production deployments initially. That being said, we’d love to hear about some of your use cases for Docker/containers with Redwood. For example, do you containerize the api and web sides separately and deploy them to, say, ECS or EKS, or do you deploy one container to rule them all, containing both sides? Do you include nginx-proxy in your container? I’m not sure that we’ll support everyone’s use case, but I think it’s useful to get some ideas on how people are already using, or would like to use, Docker et al. with Redwood.
@dom Are you coordinating the Docker effort?
Yep, he is. We’ve broken up a few tasks. I’m on the ‘gather use-cases’ task.
I am looking into this at the moment and thinking about how I want my Docker setup structured. I have been thinking of separating the web and api sides. This is because I imagine the frontend of my app can be hosted on a small instance, as it won’t change much size-wise, while the backend api sits somewhere that can scale as needed.
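For what it’s worth, that kind of split might look something like the following in docker-compose terms. This is only a sketch under assumptions: the per-side Dockerfiles, service names, and port mappings are illustrative, not an official Redwood setup.

```yaml
# Hypothetical docker-compose.yml splitting the api and web sides.
version: "3.8"
services:
  api:
    build:
      context: .
      dockerfile: api/Dockerfile   # hypothetical per-side Dockerfile
    ports:
      - "8911:8911"                # Redwood's default api server port
    environment:
      - DATABASE_URL=${DATABASE_URL}
  web:
    build:
      context: .
      dockerfile: web/Dockerfile   # e.g. a static build served by nginx
    ports:
      - "8910:80"
    depends_on:
      - api
```

Because the two sides are separate services, they can later be pushed as separate images and scaled independently on something like ECS or EKS.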
@dthyresson Thanks for connecting the pieces! This is the new forum category I mentioned where only members of the Container Working Group can create new posts, which is cool for this very reason that others in the community can identify/know that @rainkinz is on that team.
Any thoughts on how I can make that more clear?
(And thanks again for kicking this discussion off, @rainkinz!)
Thanks, that’s definitely a use case we’re thinking about, and it makes sense. I’ll make sure it’s added to our documentation.
We’re running it as one container, with nginx in another container next to it.
There’s also the case that we have the database on a different server.
We use api and web in separate Pods, serving the web side with nginx. I’m not allowed to touch the Pod configs, so I need to deliver them as separate Dockerfiles.
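For the separate-Dockerfiles case, the web side can often be reduced to a static build copied into an nginx image. A rough sketch, assuming a standard Redwood project layout and nginx defaults (the builder image and paths are illustrative):

```dockerfile
# Hypothetical web-side Dockerfile: build the static assets, then serve them with nginx.
FROM node:18 AS builder
WORKDIR /app
COPY . .
RUN yarn install && yarn rw build web

FROM nginx:alpine
# Redwood's web build output lands in web/dist by default.
COPY --from=builder /app/web/dist /usr/share/nginx/html
EXPOSE 80
```

The api side would get its own Dockerfile running the api server, so each Pod config only needs to reference its own image.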
At my other job I would like to deploy to EKS as two containers so they can be scaled independently.
A total bonus would be Helm deployment support.
I have not implemented a Docker setup yet, but in my current setup it would probably look something like this:
- Ubuntu VPS as my host
- Run the api in a Docker container
- Volume-mount my SQLite db
- Run scheduled jobs using cron on the host and docker container exec
- Use Caddy as a reverse proxy to my container
- Use Caddy as a web server to serve the web side
I think the main reason I haven’t adopted this yet is that I have no simple strategy to deploy new versions of the api without building a new Docker image.
Having to deal with image building and hosting for each deployment adds complexity that I don’t need. Maybe it would help to hot-swap the api’s source files in a running container to circumvent the build step?
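A Caddyfile for that kind of setup might look roughly like this. This is only a sketch under assumptions: it assumes the api container listens on Redwood’s default port 8911, that the built web assets live at a hypothetical /srv/web/dist, and that the app uses Redwood’s default apiUrl of /.redwood/functions.

```
example.com {
    # Forward function calls to the api container.
    handle_path /.redwood/functions/* {
        reverse_proxy api:8911
    }
    # Serve the prebuilt web side as static files, falling back to
    # index.html so client-side routing keeps working.
    handle {
        root * /srv/web/dist
        try_files {path} /index.html
        file_server
    }
}
```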
Thanks, I’ve added these to my notes. How is the EKS deployment different from the Pod deployments, though? Wouldn’t you have a Dockerfile for web and a Dockerfile for api in both cases? Thanks again!
Thanks! I’ve added this to my notes. I wonder if using something like docker-compose with the local file system mounted via bind mounts would work in this case? I’ll see if I can come up with a little proof of concept.
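Something like the following compose snippet could bind-mount the api side from the host, so a redeploy means rebuilding on the host and restarting the service rather than rebuilding the image. A sketch only; the paths, base image, and serve command are assumptions, and the container would still need its dependencies installed.

```yaml
# Hypothetical: reuse a generic image and bind-mount the app from the host.
services:
  api:
    image: node:18            # generic base; app code comes from the mount, not the image
    working_dir: /app
    command: yarn rw serve api
    ports:
      - "8911:8911"
    volumes:
      - ./:/app               # bind mount: host changes appear in the container
      - /app/node_modules     # keep container-installed deps out of the host mount
```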
I’m using containers in my dev environment to e2e test that custom generator templates are correctly applied by the Redwood CLI. I have a script that uses docker-compose to launch the container and tests, and a .husky post-commit hook that uses dockerode to build a new image when files in the scaffold directories change. Inside the container, I’m using clet in the first test to launch rw and handle interactive prompts, test that files are generated, and test the rw return code. My setup is a little ugly because subsequent tests (which test the generated code in those files) depend on the first test completing successfully. I’m using a custom Jest test environment that fails fast (jest-environment-steps) to account for that dependency between tests.
I’m just writing STDOUT from the Jest tests to a file on my host system, and looking at it when the Docker CMD (which launches the Jest tests) fails (which the Node package docker-compose reports as the exit code).
My motivation is that while I only have a few custom templates right now, I expect to generate a good number of them on this project and probably be changing them over time.
The container should also run migrate and data-migrate on startup.
I have migrate working (and I think it comes from the db templates) but I’m struggling with data-migrate.
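One common pattern is to chain the migrations into the container’s start command so they run before the api server comes up. A sketch, assuming the Redwood CLI is available in the image; the exact dataMigrate spelling and availability may vary by Redwood version.

```dockerfile
# Hypothetical tail of an api-side Dockerfile: apply schema migrations and
# data migrations on startup, then start the api server.
CMD ["sh", "-c", "yarn rw prisma migrate deploy && yarn rw dataMigrate up && yarn rw serve api"]
```

Keeping the chain in CMD (rather than baking it into an entrypoint script) makes it easy to override for one-off containers that should skip migrations.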