Replacing Redwood’s API testing infra with a fast version

Redwood’s testing infra is solid, and you can get a lot out of following the grooves they’ve set up. However, for me it is slow, so after trying it out for a bit I stopped writing any tests.

This “ignore writing tests” technique, uh, doesn’t scale with codebase complexity. I need to be able to run all the tests in a pre-commit hook without feeling like I should leave the terminal tab. So, what do?

Once my work on type generation was stable, I started to dig into recreating the testing environment from scratch. I’m now a few re-writes in, have sheared it down to the bare minimum, and have stabilized on a testing technique which gives me fast tests, runs in Wallaby.js, and keeps the tests themselves at what feels like the right abstraction layer.

The technique

Config

First you have to change your Jest config to not use any of the Redwood config, which you do by editing the jest.config.js files Redwood sets up. The web one I just commented out entirely. My api/jest.config.js looks like:

const path = require("path")

const { getPaths } = require("@redwoodjs/internal/dist/paths")
const rwjsPaths = getPaths()

const config = {
  rootDir: rwjsPaths.base,
  roots: [path.join(rwjsPaths.api.src)],
  displayName: {
    color: "blueBright",
    name: "api",
  },
  modulePaths: ["<rootDir>/api/"],
  transform: {
    "^.+\\.(t|j)sx?$": ["@swc/jest"],
  },
}

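// When Wallaby.js is driving the tests (flagged here via DEBUG), explicitly use the default jest-runner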
if (process.env.DEBUG?.includes("wallaby")) config.runner = "jest-runner"

module.exports = config

This sets up the paths correctly, and tells Jest to use SWC to transpile the TS/JS files. You likely need to run yarn workspace api add @swc/core @swc/jest. You could also use the esbuild + Jest infra instead; there’s a sketch of that swap below. This will now run your test files with zero Redwood runtime trickery.
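If you’d rather go the esbuild route, the swap is roughly this (a sketch assuming the esbuild-jest transform package, added via yarn workspace api add esbuild esbuild-jest - check its README before copying):

const config = {
  // ... same rootDir/roots/modulePaths as above ...
  transform: {
    "^.+\\.(t|j)sx?$": "esbuild-jest",
  },
}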

Testing Whats

To test a resolver with no faff, you need 3 essentials:

  1. The right types for your resolver exports
  2. A way to handle the Prisma client
  3. A way to handle the globals from Redwood

OK then:

  1. As of 1.1.0 of @orta/redwood-codegen-api-types, I made it so that my type generator gives only one potential export type for an async resolver from a service, meaning you don’t have to do any narrowing (or use an as cast) to get the right types for a resolver in a test file (there’s a sketch of what this looks like after this list).

  2. After bouncing through many ideas (‘mock all API calls’, ‘use a SQLite version in tests’, ‘DI in a replacement for db’) I eventually decided I would have to build my own in-memory replica of the Prisma API - and I was delighted to find others who had already made the same call in prisma-mock. I sent a minor PR or two their way to get a few of my use cases in, and it has been very solid.

  3. This one is easy, set the globals yourself via a function:

    const setContext = (context: { currentUser?: Partial<InferredCurrentUser> }) => {
      // @ts-ignore
      global.context = context
    }
    

    This is effectively what Redwood’s context API is doing under the hood anyway.
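Back on point 1, here’s roughly what the single-export typing looks like in a service file (the type name and import path here are hypothetical - check what the codegen emits in your project):

import type { UpdateGameResolver } from "types/games"

// One possible export type means this annotation is all you need;
// args and return type are then inferred in the service and its tests
export const updateGame: UpdateGameResolver = async ({ id, input }) => {
  // ... resolver logic
}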

Testing Hows

I’ll show you two test files to use as examples. They show everything in one file; obviously you’re free to abstract things however you want.

This first one is really all about reads from the db, so we only set up the Prisma mock once in a beforeAll to save time:

import { Prisma } from "@prisma/client"
import { mockDeep } from "jest-mock-extended"
import createPrismaMock from "prisma-mock"

import { db } from "src/lib/db"

import { generateNewsItems } from "./generateNewsItems"
 
// We make sure that the import for `db` returns a deep mock
// (which is a mock + proxies for arbitrary read/writes)

jest.mock("src/lib/db", () => ({
  db: mockDeep(),
}))

// Our setup code, which fills the db with all of the
// fixtured data:

beforeAll(async () => {
  await createPrismaMock({}, Prisma.dmmf.datamodel, db as any)
  await Promise.all([
    db.user.createMany({ data: users }),
    db.game.createMany({ data: games }),
    db.stats.createMany({ data: stats }),
  ])
})

describe(generateNewsItems.name + " with fixtured data", () => {
  it("gets stats for a particular game", async () => {
    const game = await db.game.findFirstOrThrow({
      where: { slug: "tic-tac-toe" },
      include: { stats: true },
    })

    const items = await generateNewsItems(game)
    expect(items).toMatchInlineSnapshot(`[...]`)
  })

  // ... and so on for tests
})
const games: Prisma.GameCreateManyArgs["data"] = [...]
const users: Prisma.UserCreateManyArgs["data"] = [...]

// ... and so on for fixtures

I’ve been fixturing with real data: I use a JSON export of the useful rows from my db and paste them into the arrays seen at the bottom of the file. If/when I re-use them, I’ll move them into their own file.
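For illustration, a filled-in fixture array could look something like this (the fields are invented - yours will match your Prisma schema):

const users: Prisma.UserCreateManyArgs["data"] = [
  // rows pasted straight from a JSON export of the real db
  { id: "user-123", username: "player-one", roles: ["user", "admin"] },
  { id: "user-456", username: "player-two", roles: ["user"] },
]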

The next test suite is more of a ‘check the thing changed’ set of tests:

import { Prisma } from "@prisma/client"
import { mockReset } from "jest-mock-extended"
import createPrismaMock from "prisma-mock"

import { db } from "src/lib/db"
import { Game } from "src/lib/types/shared-return-types"

// yikes right? but you'd likely hide this with setContext in a test/lib folder 
import { InferredCurrentUser } from "../../../../.redwood/types/includes/all-currentUser"

import { updateGame } from "./updateGame"

// Create a new db for each test in the file
beforeEach(() => {
  mockReset(db)
  return createPrismaMock({}, Prisma.dmmf.datamodel, db as any)
})

const setContext = (context: { currentUser?: Partial<InferredCurrentUser> }) => {
  // @ts-ignore
  global.context = context
}

describe(updateGame.name, () => {
  let game: Game
  beforeEach(async () => {
    game = await db.game.create({
      data: {
        id: "tic-tac-toe:game",
        slug: "tic-tac-toe",
        state: "WIP"
      },
    })
  })

  it("lets admins update state", async () => {
    setContext({ currentUser: { id: "user-123", roles: ["user", "admin"] } })

    const updatedGame = await updateGame({ id: game.id, input: { state: "Ready" } })
    expect(updatedGame!.state).toEqual("Ready")
  })

  it("throws an error if the user is not an admin", async () => {
    setContext({ currentUser: { id: "user-123", roles: ["user"] } })

    await expect(updateGame({ id: game.id, input: { state: "Accepted" } })).rejects.toThrow(
      "You don't have permission to change that"
    )
  })
})

As things are changing in this one, we use a beforeEach to reset the db for each test, and then another beforeEach inside the describe to create a fresh object whose changes we can watch. Optionally you can use the Prisma APIs to re-grab the object inside the test, but that’s all your call.
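That re-grab could look something like this (a sketch reusing the names from the test above):

it("persists the state change", async () => {
  setContext({ currentUser: { id: "user-123", roles: ["user", "admin"] } })
  await updateGame({ id: game.id, input: { state: "Ready" } })

  // Pull the row back out of the mocked db to check the write stuck
  const updated = await db.game.findUnique({ where: { id: game.id } })
  expect(updated!.state).toEqual("Ready")
})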


To me, this setup drops a tonne of features from Redwood’s testing suite in favour of speed, and lets you write tests which look like your app code - the types are nearly all handled for you, and you’re mostly relying on understanding how Jest works to test your app instead. Trade-offs.


Yet another impressive guide on using Redwood at scale.

We build in a lot of patterns so you can ship tested, maintainable code quickly - but as Orta points out, sometimes this comes with a trade-off - in this case, the speed of running your test suite.

Orta’s approach covers relations - which was my main issue with mocking before - and means that you can run your api side tests without using scenarios to seed a test database, but at the cost of having to maintain the JSON exports from your DB. You also get to run your tests in parallel, because the DB doesn’t need to be reset and seeded first - which is a major advantage over scenarios.

Just a reminder that sharding, for example with yarn rw test api --shard 1/5 (loop from 1 to 5, or run them in parallel), still holds true here and will probably cut even more time from your CI. Of course, in your project you decide how many shards you want - 5 was just an example.

Thanks again @orta for sharing with the community.


Hey @orta, this looks great, nice work! Does this help with the memory leak too? Using the Redwood test helpers causes big memory leaks in our app (see the GitHub issue [Bug]: `@redwoodjs/testing/config/jest/api` causes a memory leak on api side tests · Issue #6322 · redwoodjs/redwood · GitHub), which we solved by parallelising tests, but this obviously doesn’t scale: as we add more tests, individual shards run out of memory, so we need to add more and more parallel runs, even though the tests are relatively fast. This adds significant cost to our CI bills.

No promises that it fixes it (the issue could be coming from inside the app-level code), but this is a totally vanilla Jest project, so the heap seems to be bouncing around like you’d expect:

> yarn jest --runInBand --logHeapUsage

 PASS   api  api/src/services/xxx/xxx.test.ts (323 MB heap size)
 PASS   api  api/src/lib/tasks/completion/xxx.test.ts (227 MB heap size)
 PASS   api  api/src/services/xxx/xxx.test.ts (294 MB heap size)
 PASS   api  api/src/lib/tasks/completion/xxx.test.ts (201 MB heap size)
 PASS   api  api/src/services/xxx/xxx.test.ts (265 MB heap size)
 PASS   api  api/src/lib/mailer/xxx.spec.ts (312 MB heap size)
 PASS   api  api/src/services/xxx/xxx.test.ts (322 MB heap size)
 PASS   api  api/src/lib/tasks/completion/xxx.test.ts (340 MB heap size)

Test Suites: 8 passed, 8 total
Tests:       52 passed, 52 total
Snapshots:   9 passed, 9 total
Time:        5.332 s, estimated 16 s
Ran all test suites.