How to dockerize a local postgres database for tests

Are you using Supabase, Railway or some other cloud Postgres service for your db and want local tests to be… local? But also not a pain in the butt?

Here’s how I containerized a local Postgres database for running tests from yarn rw test.

Requirements

You need Docker and Docker Compose installed on your machine.

How to

1. Add a docker-compose.yaml file

Add a docker-compose.yaml file at the project root with this:

version: '3.7'
services:
    postgres:
        image: postgres:10.5
        restart: always
        environment:
          - POSTGRES_USER=postgres
          - POSTGRES_PASSWORD=postgres
        logging:
          options:
            max-size: 10m
            max-file: "3"
        ports:
          - '5438:5432'
        volumes:
          - ./api/db/postgres-test-data:/var/lib/postgresql/data

mostly copied from here

You can set the mapped host port (the left side of the :) to whatever you like. 5432 is often taken by another local Postgres process, so I used 5438 to avoid a collision. You’ll need this port later.

2. Set up TEST_DATABASE_URL

In .env and .env.example, make sure TEST_DATABASE_URL is uncommented and points to your dockerized Postgres container:

TEST_DATABASE_URL=postgresql://postgres:postgres@localhost:5438/postgres

Make sure the port matches the host port you set on the left of the : in the ports property in docker-compose.yaml.

3. Add a script to start your test db

Add these scripts in package.json

{
  "scripts": {
    "postgres:up": "docker-compose up -d",
    "postgres:down": "docker-compose down"
  }
}

4. gitignore your volume

Add the volume dir from docker-compose.yaml to your .gitignore.

.gitignore

api/db/postgres-test-data

Note: This is for extra safety. In practice, Git usually can’t read this dir anyway because Docker creates it owned by root, but if you ever run something like sudo chown -R $(whoami):$(whoami) . in your project, the .gitignore entry makes sure you don’t accidentally push all this junk to GitHub.

5. Start postgres test db before running tests

At the beginning of your dev session, run

yarn postgres:up

This will run a postgres container in the background for your yarn rw test runs to connect to.

If you need to stop the container, run yarn postgres:down. Note that docker-compose down removes the container but not the mounted data directory, so the data survives between runs; either way it shouldn’t matter, since this database is only for tests.

Conclusion

I don’t know much about Docker, but this has worked so far with no problems for running yarn rw test.


Excellent, now I can deploy this Postgres container to Fly and run my local tests on a CDN.

EDIT: FYI I updated the volumes property in docker-compose.yaml to put the Postgres data files in the api/db dir so that rw-api-server excludes them from its file watcher

after update

volumes:
          - ./api/db/postgres-test-data:/var/lib/postgresql/data

PS if you run into an error like this:
FATAL: could not open relation mapping file "global/pg_filenode.map"

simply restart your Docker container:

  1. Get the test container’s id from docker ps
  2. Run sudo docker restart <CONTAINER_ID>

In case you didn’t know: Supabase’s flavor of Postgres is open-sourced, though it’s potentially overkill for most local use cases.

cool - is that for running supabase locally?

if so, which is easier to set up? and would a local supabase setup work for Railway projects?

Supabase’s image includes a lot of extra stuff like PostGIS, pg_cron, and pgAudit. It’ll be useful for people who have these extensions on their production database and want to make sure their testing database includes the same capabilities.

This will be slightly different from Railway’s Postgres setup. They are using an image from TimescaleDB that also makes some opinionated decisions about extensions like PostGIS; you can check it out here.


Apologies for the late reply, just to add to what @ajcwebdev already said: Supabase’s image can be used locally or when self-hosting. I use it locally and it isn’t any harder to set up than ‘vanilla’ Postgres; not sure if the extensions are plug-and-play, to be honest.

I also noticed your question here. Have you considered using a named volume instead of mounting a directory from the host? That way your database’s persisted data isn’t actually stored alongside the application.

Your docker-compose.yml would just need an update:

version: '3.7'
services:
    postgres:
        image: postgres:10.5
        restart: always
        environment:
          - POSTGRES_USER=postgres
          - POSTGRES_PASSWORD=postgres
        logging:
          options:
            max-size: 10m
            max-file: "3"
        ports:
          - '5438:5432'
        volumes:
          - postgres_data:/var/lib/postgresql/data
volumes:
  postgres_data:

@realStandal thanks for the tip on the named volume - this keeps the data from being mounted inside the project.

I’ve been running into an issue where, on a second run of the tests, a “permission denied” error for the volume shows up. The only fix I’ve found so far is restarting the container, so I added a test script in package.json that restarts it before running tests:

"scripts": {
    "postgres:up": "docker-compose up -d",
    "postgres:down": "docker-compose down",
    "test": "yarn postgres:down && yarn postgres:up && yarn rw test"
  }

Now I can run all tests with

yarn test

api tests with

yarn test api

and web tests with

yarn test web

There is probably a better way to do this, e.g. by chowning the volume directory during postgres:up, but I haven’t figured that out yet
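One possible refinement (an untested sketch, not something anyone in this thread has verified): the official postgres image ships with pg_isready, so you could add a healthcheck to docker-compose.yaml and have your scripts wait until the database actually accepts connections, rather than tearing the container down and back up before every run:

```yaml
services:
    postgres:
        # ...same image/environment/ports/volumes as above...
        healthcheck:
            # pg_isready exits 0 once the server accepts connections
            test: ['CMD-SHELL', 'pg_isready -U postgres']
            interval: 5s
            timeout: 3s
            retries: 5
```

With that in place, docker ps reports the container as healthy, and a postgres:up wrapper could poll for that state before yarn rw test starts.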
