Recommended way to run dataMigrate in a Docker setup

Hi,
I am wondering how to use dataMigrate properly.
As far as I can see there are two ways to apply data migrations: either on the server, or from my local IDE while connected to the remote server.
Now we have a Docker setup and try to keep the image size down. It seems I would need to install the Redwood CLI, ship the dataMigrations folder, maybe even all src files (since dataMigrations are not built), and also include the necessary seed files. There is apparently also an undocumented way to use the Prisma client from the dist folder, but that would not solve the issue that dataMigrations are not being built: fix(data-migrate): provide option to use db in dist instead of src by jtoar · Pull Request #8375 · redwoodjs/redwood · GitHub
All of those steps would unnecessarily increase the Docker image size.
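To give an idea of the cost, the extra steps in the api image would look roughly like this (just a sketch; the paths and exact packages are my assumptions):

# install the whole Redwood CLI plus its dependency tree
yarn global add @redwoodjs/cli
# ship the data migration scripts themselves
cp -r api/db/dataMigrations /app/api/db/
# ship src too, since the data migrations are not built
cp -r api/src /app/api/
# plus any seed files the migrations import
cp -r scripts /app/scripts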

The alternative of applying dataMigrations from my local environment works, but it does not feel safe.
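For reference, this is roughly what that looks like when run from a local checkout (the connection string is a placeholder):

# point the local CLI at the remote database and run pending data migrations
DATABASE_URL="postgresql://user:pass@remote-host:5432/db" yarn rw dataMigrate up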

Any recommendations?

Ideally it would be its own container that you can then shut down again, but we’ve settled for the Docker approach.
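For what it’s worth, such a one-shot migration container could look roughly like this (the image name and paths are assumptions on my side):

docker run --rm \
  -e DATABASE_URL="$DATABASE_URL" \
  registry.example.com/my-app-api:latest \
  sh -c 'prisma migrate deploy --schema=/app/api/db/schema.prisma \
         && npx @redwoodjs/cli-data-migrate'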

Basically, we run this when the container starts:

#!/bin/sh

# NOTE: Requires binaries. Install with:
#   yarn global add @redwoodjs/api-server @redwoodjs/internal prisma

# Apply pending schema migrations
prisma migrate deploy --schema=/app/api/db/schema.prisma

# Apply pending data migrations via the standalone package
npx @redwoodjs/cli-data-migrate

# Start the Redwood server
rw serve

One thing that’s not as nice is that you need to be careful not to have a deploy where you migrate data and also remove that table from the schema in the same release. But depending on how often you deploy, that’s more or less annoying.
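In practice that means splitting the change across two releases; a sketch of the ordering:

# Release 1: run the data migration while the table/column still exists
npx @redwoodjs/cli-data-migrate

# Release 2 (a later deploy): only now apply the schema migration that
# removes the table/column
prisma migrate deploy --schema=/app/api/db/schema.prisma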


Thanks for your reply!

Fortunately the hiccup with removing something from the schema is also mentioned in the Redwood docs. We will be careful :slight_smile:

But we run serve and migrate deploy at the end of the Docker build process, and we won’t have all the build files available at that point anymore. It would be interesting whether this process could be moved to an earlier phase of the build. Not sure if that’s a great idea though. :confused:

@dennemark I did some work on data migrate, and @razzeee gave me feedback at the time (thank you again!). Making it a standalone executable via npx was the furthest I’d gotten the last time I worked on it, to avoid needing the whole CLI binary and its dependencies in the image, which is huge. Working on it is confounded by the fact that it relies heavily on Babel to transpile on the fly (the files aren’t built, as you noted), which also introduced hard-to-debug dependency errors. Where are you deploying, out of curiosity? Most deploy providers don’t allow more than one image; not sure if you have a pipeline that’s more capable?

The npx approach is nice! I’m going to have a look at it. But wouldn’t it make sense to also be able to build the migrations folder? I think it would help when using Docker.

In our case we have a Docker setup that is close to the one provided by Redwood (we want to switch to the official one), meaning separate images for api and web, with a lot of clutter removed to keep them small. We host it on a Hetzner server.

Yeah, I’ll have to revisit why we don’t build them; I can’t remember if it’s just how we used to do things or if there was actually a blocking reason. The only subtlety I can think of is that if you use any dependencies in your data migration scripts, they should probably be listed as production dependencies, not dev dependencies, so that they’re available in the production image.
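For example, with a hypothetical helper package, the difference is just where it lands in the api workspace’s package.json:

# ends up under "dependencies" and survives production pruning
yarn workspace api add some-migration-helper

# ends up under "devDependencies" and gets stripped from the image
yarn workspace api add -D some-migration-helper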

Another alternative seems to be the console Docker container, if the experimental Redwood Docker setup is used. It appears to copy the whole repository and can then be used to run scripts, but it will be a huge container.
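If you go that route, a one-off run might look like this (assuming the compose service is named console; treat that as an assumption on my side):

# run a data migration through the console container, removing it afterwards
docker compose run --rm console yarn rw dataMigrate up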