I’m willing to bet that the way you integrate Redwood with Docker-based APIs is to deploy the containers anywhere and connect to them like you would any other third-party API, right?
I figured it was worth posting here in case there are some best practices specific to Redwood for doing this. One thing I thought is that since my Redwood app is deployed with Netlify, it would make sense to deploy containers to AWS, since that’s what Netlify uses, and that might help keep latency down for requests on the backend?
Since Amazon supports Docker images as Lambdas, it would be amazing if Netlify supported just dumping Dockerfiles into the functions folder and having them built and deployed as Lambdas, but that’s just me dreaming I guess.
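For anyone who hasn’t seen the container-image support for Lambda, a minimal sketch of what such a Dockerfile looks like is below (the handler filename and export name are assumptions for illustration; AWS publishes the base images on their public ECR):

```dockerfile
# Minimal sketch of an AWS Lambda container image using the Node.js base image.
FROM public.ecr.aws/lambda/nodejs:14

# Copy the function code to where the Lambda runtime expects it.
COPY handler.js ${LAMBDA_TASK_ROOT}

# "handler.js" exporting a function named "handler" is an assumption here.
CMD ["handler.handler"]
```

The dream would be Netlify building something like this automatically for any Dockerfile dropped into the functions folder.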
Tiny bit of context: I’ve just started working on something in Docker, and it has to be Docker (I’m pretty sure) because it does some OS-specific things.
Working on an API for OpenSCAD: https://twitter.com/IrevDev/status/1365021408523218944
Work so far: Comparing main...kurt/openscad-api · Irev-Dev/cadhub · GitHub
To support your idea, let me share a screenshot showing the process of updating Discourse (the very tool we are all using to share ideas).
It’s a scenario pretty similar to Redwood: an open-source tool whose source code is available, but whose build process is very difficult to execute without mistakes (see this installation manual).
As a spot of follow-up, I ended up getting something deployed to AWS using the Serverless Framework. I’m actually using the Docker Lambdas that AWS introduced last year.
The PR that introduced most of the work is here (if the diff is of interest to anyone), but the result of the work so far can be seen at the repo.
What the Docker container is doing in this bit of UI is returning a new image for each perspective change.
Video of it in action here: https://twitter.com/IrevDev/status/1371374443319029761
I thought about integrating the endpoint back into the GraphQL API, but it’s so standalone I thought it was fine to keep it simple as a separate REST endpoint, though that might change.
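For anyone curious how the Serverless Framework wires up a Docker Lambda, the shape is roughly the following (a hedged sketch, not the actual CadHub config; the service name, image key, and paths are all illustrative):

```yaml
# Illustrative serverless.yml for deploying a Lambda from a container image.
service: openscad-api

provider:
  name: aws
  # Serverless builds the image and pushes it to ECR for you.
  ecr:
    images:
      openscad:
        path: ./docker/openscad  # directory containing the Dockerfile

functions:
  render:
    # Reference the image by the key defined above instead of a handler.
    image:
      name: openscad
    events:
      - http:
          path: render
          method: post
```

With this, `serverless deploy` handles the ECR push and Lambda creation in one step.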
@Irev-Dev Woah, CadHub is looking good! You’ve made some great progress since the initial version I saw!
Docker has been coming up a lot lately. I’m noodling on something like a yarn rw setup deploy docker command that would create vanilla Dockerfiles for Web and API (possibly with a Docker Compose for running locally). I don’t think people would specifically use it for deployment, but I do think they’d use it as a foundation for whatever hosting option they choose.
That said, would you be able to share your project’s Dockerfile(s)? I looked in the repo but didn’t see any.
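To make the idea concrete, a “vanilla” API-side Dockerfile might look something like this (purely a sketch of the hypothetical setup command’s output; the base image, paths, and port are assumptions):

```dockerfile
# Hypothetical vanilla Dockerfile for a Redwood API side.
FROM node:14-alpine

WORKDIR /app

# Copy manifests first so dependency installation is cached as its own layer.
COPY package.json yarn.lock ./
COPY api/package.json api/
RUN yarn install --frozen-lockfile

# Copy the rest of the project and build the API side.
COPY . .
RUN yarn rw build api

# 8911 is Redwood's default API port.
EXPOSE 8911
CMD ["yarn", "rw", "serve", "api", "--port", "8911"]
```

The Web side would get a sibling Dockerfile, and a Docker Compose file could tie the two together for local runs.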
Oh whoops looks like I didn’t actually link to the diff, it’s here: Create and deploy simple openscad api by Irev-Dev · Pull Request #228 · Irev-Dev/cadhub · GitHub
And the docker files are at: cadhub/api/src/docker at main · Irev-Dev/cadhub · GitHub
I doubt they’ll be of much use for some of the use cases you’re highlighting, @thedavid, since they’re Docker Lambdas specifically for AWS. There’s lots of AWS-specific stuff in there, e.g. the Docker Compose is there to spin up a container per endpoint (i.e. as if it were running on AWS). But maybe they’ll help. To make it easier for new folks to clone and spin up my repo, spinning up the Docker files is not required, as by default it goes to the live endpoint in dev mode.
I don’t mind the idea of a Postgres DB in a Docker container, though I’m not sure how you do persistent data well with Docker.
Thanks for your kind words, it’s coming along slowly but surely. Between you and me, I was pretty gutted by this PR not going through for a dependency of mine, but hey, that’s open source, what can you do!? The man just doesn’t like webpack.
Had an awesome ratio too! lol (that’s from removing node_modules from version control!)
Every use-case example helps. Thank you!
Postgres as a container → you mount a file system. I’ve never trusted myself to do this outside of local dev.
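For local dev, the mounting usually happens through a named volume in Docker Compose, something like this minimal sketch (the password and version tag are placeholders):

```yaml
# Minimal sketch: Postgres in a container with a named volume for persistence.
version: "3.8"
services:
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: example  # local dev only, never a real secret
    ports:
      - "5432:5432"
    volumes:
      # The named volume survives container restarts and rebuilds.
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
```

Since Docker manages the volume’s lifecycle, the data persists until you explicitly remove it with something like docker volume rm.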
And, woah, that PR experience did not end well. Did you have to fork?
I was already running off my own fork since I needed it; it was impossible to integrate well without modules. I had tried, but I had a whole bunch of race conditions that I couldn’t debug, from scripts loading in whenever they pleased. Even though I was worried that running off my own fork ran the risk of it not being merged, I did think it was pretty likely to be merged, and I didn’t really have a choice due to the bugs. Now that the repos are somewhat doomed to drift apart, we’re considering dropping support. Later, once we integrate OpenSCAD (which won’t have the same fate due to how it’s integrated), we can always come back and make our own version, since a few other community members and I didn’t like some of the opinionated choices in that project anyway.
In the end I can’t be upset: I had this idea kicking around in my head for a while, and it was seeing CascadeStudio that gave me the push to try the project and give Redwood a go at the same time.
I’ll be doing a decent amount of research into this in the coming weeks and will be dumping lots of links and notes. I created a separate thread for discussing how running Redwood on Lambda compares with running it on Docker.
That’s a slightly different question from what the best way to run Redwood on Docker is. It’ll be better to keep as much of the discussion of using Docker with Redwood here, as the title so aptly states.
Notes from Jim Fisk on Dockerizing Redwood
Hey Anthony, definitely happy to share anything we’re doing, though there might be a couple of differences. The first is that I’m using a project called @GoReleaser to automate the actual deployment to DockerHub. It’s a really helpful project, but I’m not sure if it would work with Redwood.
I’ve also only thought about running our containers in CI; after the build we just discard them. I’ve done a little ECS work on a different project that used Terraform, but I’m definitely not an expert, and I unfortunately haven’t tried ACI yet.
My whole thought with using Docker was to allow people to lock their CI to whatever version of plenti they want, so they can continue building at that version even if we do a new release that breaks the API. It also gives them an easy way to build in CI with a fast cold start, so they don’t have to download the binary themselves or do other setup that can slow the process down. Sounds like Redwood needs a persistent server though?
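The version-pinning idea looks roughly like this in a CI config (a hypothetical GitHub Actions sketch; the image name, tag, and build command are illustrative assumptions, not a documented setup):

```yaml
# Hypothetical CI job that pins the build tool to a specific image tag,
# so a future release with breaking changes can't break this pipeline.
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: plentico/plenti:0.4.7  # image name and tag are illustrative
    steps:
      - uses: actions/checkout@v2
      # The binary is already baked into the image, so there's no
      # download or install step slowing down the cold start.
      - run: plenti build
```

Bumping the tag is then an explicit, reviewable change rather than an implicit upgrade.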
We have a new yarn redwood serve api command. And yarn redwood serve web is coming as well!
Together, and especially in the case of Web, these will be great building blocks for a vanilla Dockerfile, perfectly capable of prototyping, testing, and deploying quickly. Once more performant configuration and infrastructure are needed, those could be added, e.g. PM2, Kubernetes, Nginx (to serve Web), CDN, etc.
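As a rough illustration of how those two commands could pair up in a local Compose setup (ports and build context are assumptions; this isn’t a shipped config):

```yaml
# Illustrative docker-compose pairing the two serve commands.
version: "3.8"
services:
  api:
    build: .
    command: yarn redwood serve api --port 8911
    ports:
      - "8911:8911"
  web:
    build: .
    command: yarn redwood serve web --port 8910
    ports:
      - "8910:8910"
```

From there, swapping the web service for Nginx plus a CDN is an incremental change rather than a rewrite.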