API Testing and Scenarios

We are using Redwood's scenario system extensively in our api-side tests. Running the api-side tests takes well over 10 minutes, so we have been looking at ways of optimizing this. To date, we've focused on reducing the number of database records in the standard scenario setups, moving records that are infrequently used out of the scenarios and into a seed function in the specific tests where they're required. Using this approach, we've made a small dent in our test times.

One thing I’ve noticed is that, as part of every test wrapped in scenario, Redwood creates all the scenario database records and then destroys them after each test. However, there are numerous cases where we are not modifying the data as part of the test. For example, tests of queries, and all the mutation tests that should fail, shouldn’t touch the underlying db records.

What are your thoughts on having a version of the scenario setup that allows nested it test statements within an overall wrapping scenarioSetup function? That is, the scenarioSetup function would not actually run any tests, just do the database setup and teardown. Yes, there is some risk of test contamination, but the user could determine when this would be beneficial vs the current scenario call.

Something like the following:

scenarioSetup(standard, async (scenario: StandardScenario) => {
  it('does thing A', () => {
    // ... test 1
  })

  it('does thing B', () => {
    // ... test 2
  })
})

Would appreciate any feedback on this concept.
@rob, I believe you are the original designer of these scenarios… do you have any thoughts on this?
@MichaelrMentele, I understand from your talk at the conference that you’ve possibly taken a different approach than scenarios in your testing, and would love your take on this as well.

I’m pinging @danny here as well, as (IIRC) he did some performance work on scenario testing. Maybe you have some input/ideas to share Danny?

Hi @cjReimer,

Welcome back to the forums! Happy to have met you at the conference - so good to be able to associate a real person with an avatar online :rofl:

First of all - this is a great explanation! It makes sense that you’d like to set up the scenario once and run multiple tests.

Let me try a few things out first, and I’ll get back in a few hours - let’s see if we can get this implemented!

I like this idea! But I think we need to make it very clear to people that they will VERY easily get false positives/negatives if they start modifying records in the database during their tests—subsequent tests become dependent on the changes, and the order in which they run.

The big idea behind scenarios is that each and every test is isolated in its own world and won’t affect each other when making DB changes (which they would if you relied on modifying the database inside an it()).

Maybe we go so far as to show a notice/warning during any test run that’s wrapped in scenarioSetup, notifying the user that scenarioSetup doesn’t clean up between tests, so make sure you’re doing it yourself. It’d be really neat if, inside any code wrapped in scenarioSetup, we could hook into Prisma’s update/create/delete functions and show a warning message if one of those is used, since they’re obviously modifying the state of the DB and will start causing dependent ordering of tests.
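
A rough sketch of that hook idea, assuming we can wrap each Prisma model in a Proxy; the function name, the list of mutating methods, and the warning text here are all illustrative, not Redwood's actual implementation:

```typescript
// Illustrative list of Prisma model methods that mutate the database.
const MUTATING = new Set([
  'create',
  'createMany',
  'update',
  'updateMany',
  'delete',
  'deleteMany',
  'upsert',
])

// Wrap a Prisma-like model so any mutating call logs a warning but still
// goes through. Hypothetical helper: scenarioSetup could apply this to each
// model on the client while its tests run.
function warnOnMutation<T extends object>(model: T, modelName: string): T {
  return new Proxy(model, {
    get(target, prop, receiver) {
      const value = Reflect.get(target, prop, receiver)
      if (typeof value !== 'function') return value
      if (MUTATING.has(String(prop))) {
        return (...args: unknown[]) => {
          console.warn(
            `scenarioSetup: ${modelName}.${String(prop)}() modifies the DB; ` +
              `records are NOT reset between tests, so test order may matter`
          )
          return value.apply(target, args)
        }
      }
      // Reads pass through untouched (bound so `this` still works).
      return value.bind(target)
    },
  })
}
```

Reads like findMany would pass through silently; only calls in the mutating set would emit the warning.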

every test is isolated in its own world and won’t affect each other when making DB changes

Valid point, Rob - perhaps it’s how you frame it, though. I’m imagining it more like a “beforeAll” or “describe” block where you can run some common code, and then in each “it” block you just write your expectations. Personally, when I see beforeAll or describe blocks, I don’t assume the state is reset between tests!

I’ve evaluated two ways of achieving this:

Option 1: setupScenario utility function

You call setupScenario in your beforeAll block, instead of running a scenario block (which is equivalent to it in regular Jest tests).

Doing it this way is verbose, but it does give you full control over nesting your tests.

Downside: I think we may also need to expose a “tear down” function in afterAll to reset the database, and it’s verbose

describe('posts', () => {
  let scenario: StandardScenario

  beforeAll(async () => {
    scenario = await setupScenario()
    // or a named scenario like
    // scenario = await setupScenario('myOtherScenario')
  })

  // Unsure if we really need this
  afterAll(teardownScenario)

  it('returns all posts', async () => {
    const result = await posts()

    expect(result.length).toEqual(Object.keys(scenario.post).length)
  })

  it('updates a post', async () => {
    const original = (await post({ id: scenario.post.one.id })) as Post
    const result = await updatePost({
      id: original.id,
      input: { title: 'String2' },
    })
    // ...
  })

  describe('You can also nest here', () => {
    it('returns all posts two', async () => {
      const result = await posts()

      expect(result[0].title).toEqual('String')
    })
  })
})

Option 2: describeScenario

This is essentially what you described above, Curtis - but I’d like to include the word “describe” in there, because it indicates that it’s equivalent to a describe function, which is how you normally group things in Jest.

At the moment, I can’t get it to actually pass the scenario data in though, but I’ll keep digging.

describeScenario('my scenario', (scenario) => {
  // currently unable to pass a value in ☝️, because of the Jest lifecycle
  it('xxx', ...)
  it('yyy', ...)
})

but it is possible to supply a getter function like this:

describeScenario('contacts', (getScenario: () => StandardScenario) => {
  let scenario: StandardScenario

  beforeAll(() => {
    scenario = getScenario()
  })

  it('xxx', () => {
    // or directly here
    const itScenario = getScenario()
    // ...
  })
})

Let me know what you guys think.

Great! Thanks, Danny and Rob!

Danny, I believe I prefer Option 2, the describeScenario alternative you’ve proposed, as it makes it a little easier (and clearer) to configure how and when we want the scenarios created and destroyed within a single test file. I prefer not having to utilize beforeAll if we don’t need to. I know you indicated that it was not straightforward to pass the scenario data through to the describeScenario, but if we can make that work, that would be awesome!

Are there any typescript challenges to getting the scenario types if we need to go to a getScenario() function call?

Rob, I totally agree with you that there are some additional risks, as tests are no longer isolated. My opinion is that this risk can be clearly expressed and warned about in the docs. If you feel a warning log is required, I’m not grossly opposed to it, but I don’t know if it’s necessary. Note that there are cases where we would want to run sequential tests that mutate the data and do not reset in between. For example, let’s say you have a system with multiple states (a state engine). It would be great to run sequential tests that walk an entity through all the different states of the state engine, without requiring each state to be explicitly set up in a scenario.
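
To make the state-engine case concrete, here’s a minimal sketch. The order states, the advance function, and the describeScenario usage in the comment are all hypothetical; the point is that sequential it blocks could walk one record through every transition without reseeding between steps:

```typescript
// Hypothetical state engine: an order moves through a fixed sequence of states.
type OrderState = 'draft' | 'submitted' | 'approved' | 'shipped'

const NEXT: Record<OrderState, OrderState | null> = {
  draft: 'submitted',
  submitted: 'approved',
  approved: 'shipped',
  shipped: null, // terminal state
}

// Move an order one step forward; throws if it's already terminal.
function advance(state: OrderState): OrderState {
  const next = NEXT[state]
  if (next === null) throw new Error(`cannot advance past ${state}`)
  return next
}

// With the proposed API, sequential `it` blocks could share one seeded
// record and mutate it step by step (sketch only):
//
// describeScenario('orders', (getScenario) => {
//   it('submits the draft order', async () => { /* advance + assert */ })
//   it('approves the submitted order', async () => { /* advance + assert */ })
//   it('ships the approved order', async () => { /* advance + assert */ })
// })
```

Each test asserts one transition, so a side-effect bug in any step would surface in the next test in the sequence.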

Thanks for all your work on this. You guys are awesome!

Are there any typescript challenges to getting the scenario types if we need to go to a getScenario() function call?

Should all be possible; not really any different from how we’re doing it currently. The only difference would be the getScenario type, which is just a function that returns your scenario, e.g. () => StandardScenario

My opinion is that this risk can be clearly expressed and warned in the docs.

Agreed with you on this Curtis!


Let me get a PR up for this - I’d appreciate help writing the docs though, Curtis. Some examples of how you’d like to make use of this (in relatively real situations) would be better than the cases I’d come up with.

Since this is a new API, I’d like to just confirm with the rest of the team before adding it in!

Absolutely, I’ll help with some docs!

Thanks, Danny!

I like Option 2 as well!

Remember that in the scenario syntax, there’s an optional first argument: the name of the scenario to load. I would expect describeScenario to have that same signature!
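
For instance, the overload resolution might look something like this sketch; resolveArgs and the default scenario name 'standard' are illustrative, not the actual implementation:

```typescript
// The callback shape describeScenario would receive (simplified).
type TestFn = (getScenario: () => unknown) => void

// Mirror the existing scenario() overloads: an optional leading string names
// the scenario to load, otherwise the default 'standard' scenario is used.
function resolveArgs(
  a: string,
  b: string | TestFn,
  c?: TestFn
): { scenarioName: string; description: string; fn: TestFn } {
  // Two-string form: describeScenario('named', 'description', fn)
  if (typeof b === 'string') {
    return { scenarioName: a, description: b, fn: c! }
  }
  // One-string form: describeScenario('description', fn)
  return { scenarioName: 'standard', description: a, fn: b }
}
```

So describeScenario('admin', 'admin-only posts', fn) would load the admin scenario, while describeScenario('posts', fn) would fall back to standard, matching how scenario() behaves today.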

Note that there are cases where we would want to run sequential tests that mutate the data, and not reset in-between. For example, let’s say you have a system with multiple states (state engine). It would be great to run sequential tests that run an entity through all the different states of the state engine, without requiring each state to be explicitly set up in a scenario.

Do Jest tests always run in a deterministic order? For some reason I thought they were randomized (I remember some Ruby testing library randomizing them on purpose specifically to make sure that they didn’t rely on mutated state from a previous test run).

It’s been a while since I’ve combed through the TDD dogma, but I remember it being a best practice to make sure that every test is completely isolated and should be able to run on its own with really no other dependencies (like the outcome of previous tests) required.

I remember people moving away from nested describe and beforeAll blocks and back to a “flat” test file specifically because of this: it starts to obfuscate what data is actually in play when your test ends up running. In the long run it was better to have duplicated setup in each test so at least you knew at a glance what you were testing.

But that being said, I’m sure people have different styles of testing, and we shouldn’t make their lives more difficult if we can avoid it! :slight_smile:

Rob,

Thanks for replying, and for the caution! You are right to be suspicious of Jest test order, but it turns out describe blocks do have a defined execution sequence. It’s clarified in:
https://jestjs.io/docs/setup-teardown#order-of-execution-of-describe-and-test-blocks

With the code Danny has prepared, the tests themselves appear to run in sequence, and so far, I have not encountered an issue with Danny’s test code, although I need to do further testing.

I do agree with you that, in general, independence of tests is appropriate. However, used with caution, there are some advantages to intentionally running sequential, dependent tests in certain scenarios. My thought is that this style of testing, which reflects an actual business process, may catch side-effect bugs that would be missed when users manually set up the specific test data for each step in the process.

So, my opinion is that people should set up tests to be independent as much as practical; however, there is room, with caution, for dependent tests to be utilized. I suspect that there are others who would disagree with me on this, and I’m open to being schooled on this topic! :smile:

Thanks for your help and input on this!

We would also love to have Option 2 as described above, for speeding up some of our less critical tests.

Thanks @matbgn - Option 2 is in PR #9572: “feat(describeScenario): Seed scenario once and group tests” by dac09 (redwoodjs/redwood on GitHub)

But currently we’re discussing whether we want to introduce the new API. Feel free to add your input and use cases here or on the PR :slight_smile:

If I’m reading this right, this will allow us to start using Jest’s it.each feature with scenarios, which will be really nice! Looking forward to this hitting master!
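
If that lands, the table-driven pattern might look like the sketch below. The seeded records and findPost lookup are an in-memory stand-in just to make the idea runnable; the describeScenario / it.each usage in the comment is hypothetical:

```typescript
// In-memory stand-in for records a scenario would seed once per describe block.
const seededPosts = [
  { id: 1, title: 'First post', published: true },
  { id: 2, title: 'Second post', published: false },
]

// The kind of read-only lookup each table-driven case would exercise.
function findPost(title: string) {
  return seededPosts.find((p) => p.title === title)
}

// With describeScenario seeding once, a whole table of cases could share
// the same records without re-running setup per row (sketch only):
//
// describeScenario('posts', (getScenario: () => StandardScenario) => {
//   it.each([{ title: 'First post' }, { title: 'Second post' }])(
//     'finds "$title" in the seeded data',
//     ({ title }) => {
//       expect(findPost(title)).toBeDefined()
//     }
//   )
// })
```

Since each row only reads, this is exactly the non-mutating case where skipping the per-test teardown should be safe.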