Re: Prisma's lack of `provider = env(...)` support in schema file

Hi there!

I’m a bit bummed by the fact that I have to live with an uncommittable change in my schema.prisma file so that I don’t mess up the datasource provider in production…

I’m all for dev environment as close to production as possible, but for quick prototypes / small projects it’s nice to work with SQLite.

So I’ve been playing around with a hack tonight, and because it’s really hacky I’m hesitant to open a PR. Also, do you have any information about Prisma allowing the use of environment variables for the datasource provider value? It might not be worth hacking around it if they’re close to releasing something.

Anyway, here’s the idea I’ve been toying with:

  1. Added a module in @redwood/cli:
// packages/cli/src/lib/expand-schema-env.js

import fs from 'fs'
import path from 'path'

import { getPaths } from 'src/lib'

export const expandSchemaUnsupportedEnvVariables = () => {
  // Get Prisma's schema file content
  const prismaSchemaPath = path.join(
    getPaths().base,
    'api/prisma/schema.prisma'
  )
  const schemaOriginalContents = fs.readFileSync(prismaSchemaPath, 'utf-8')

  let expandedSchema = schemaOriginalContents

  // Find calls to `env()` in datasources & generator provider values
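  // Named groups: `lead` covers everything from the datasource/generator keyword
  // up to and including the `provider = ` part, `env_call` is the whole
  // `env("...")` call, and `variable` is the name between the quotes.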
  let matches = expandedSchema.matchAll(
    /(?<lead>(?:datasource|generator)[^}]+provider[^=]*=\W+)(?<env_call>env\(["'](?<variable>[^"']*)["']\))/gm
  )
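  // Go through the matches from last to first so that earlier match indices
  // stay valid while the string is being rewritten.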
  matches = Array.from(matches).reverse()

  for (const match of matches) {
    // Replace each match with the actual environment value
    expandedSchema =
      expandedSchema.slice(0, match.index + match.groups.lead.length) +
      '"' +
      process.env[match.groups.variable] +
      '"' +
      expandedSchema.slice(match.index + match[0].length)
  }

  // Write newly expanded content to the schema file
  fs.writeFileSync(prismaSchemaPath, expandedSchema)

  return () => {
    fs.writeFileSync(prismaSchemaPath, schemaOriginalContents)
  }
}

  2. Updated commands in packages/cli/src/commands/dbCommands to rewrite the schema file right before the execution of the command, and restore it right after.
    For example, the save command’s handler now looks like this:
// Imports needed at the top of the command file:
import { runCommandTask } from 'src/lib'
import { expandSchemaUnsupportedEnvVariables } from 'src/lib/expand-schema-env'

export const handler = async ({ name, verbose = true }) => {
  const restoreSchema = expandSchemaUnsupportedEnvVariables()

  await runCommandTask(
    [
      {
        title: 'Creating database migration...',
        cmd: 'yarn prisma2',
        args: [
          'migrate save',
          name && `--name ${name}`,
          '--experimental',
        ].filter(Boolean),
      },
    ],
    {
      verbose,
    }
  ).finally(restoreSchema) // put the original schema back, whether the command succeeded or not
}
  3. Updated my .env.defaults file to add DATABASE_PROVIDER=sqlite

  4. Updated my schema’s datasource:

datasource DS {
  provider = env("DATABASE_PROVIDER")
  url = env("DATABASE_URL")
}

This works like a charm locally! (as long as you use env variables that do exist :grimacing:)
And because deploying only involves calling yarn rw db up --no-db-client && yarn rw build and publishing the built files, I have no reason to believe it won’t work on Netlify or elsewhere.
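
One thing I’d probably add before opening any PR is a guard for variables that aren’t set: right now a missing variable would silently end up as provider = "undefined" in the schema. Something like this hypothetical helper (not in the module above) would do:

// Hypothetical helper, not part of the module above:
// resolve an environment variable or fail loudly.
const resolveEnvVariable = (name) => {
  const value = process.env[name]
  if (value === undefined) {
    throw new Error(
      `Cannot expand env("${name}") in schema.prisma: ${name} is not set`
    )
  }
  return value
}

The loop would then call resolveEnvVariable(match.groups.variable) instead of reading process.env directly.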

A few notes:

  • Doing this means generated migrations will have a schema.prisma snapshot file that contains provider = "sqlite" and not the env("DATABASE_PROVIDER") call. But that’s actually already the case in the current state of things: migrations generated locally contain the same provider and don’t prevent things from working with Postgres once deployed.
    I guess the snapshotted file is just there for reference.

  • yarn rw dev doesn’t work anymore with that schema because it doesn’t delegate generation to the CLI commands but directly calls yarn prisma2 generate --watch. I haven’t pushed the experiment further because I don’t want to spend more time on something that might be deemed completely useless :grin:
    However, I think we could make it work by introducing an intermediate watching layer: instead of having Prisma watch the file, we watch it ourselves, expand the env() calls into a temporary file whenever a change is detected, and call yarn prisma2 generate --schema=/path/to/schema.expanded.prisma.
    This way, the original file always looks “clean” to the developer, and the overall devX is improved.

… As I said it’s very hacky but it’s also future-proof: once Prisma catches up with this, just remove the expansion calls from the tooling and nothing will have to change on the developers’ side!

So, there you go… wdyt? ^^

Prisma has an issue open for adding support (https://github.com/prisma/prisma2/issues/1487), and we’ve been bugging them about it for weeks now! There hasn’t been any commitment from them on a release date. :frowning:

We went down the route of changing this dynamically at build time, but it ended up causing more trouble than it was worth: I had it working at deployment time, but Prisma’s --watch process for re-generating the client wasn’t covered by my fix… ugh. So we undid all of that and, since Prisma said a fix was coming shortly, figured it was easiest to just explain the current situation and the workaround in the tutorial. But maybe that fix isn’t coming as soon as we hoped.

@thedavid has a potential workaround that involves a slight git workflow change for the developer. He’s going to add it here in a second, see what you think…

Could you manage this with a different development to deploy branch workflow? For example:

  1. Starting with provider = "sqlite" in your code branch master
  2. Create new code branch dev from master, which becomes your local working branch – you’d create other working branches from dev. Effectively, just make sure you commit all changes to dev
  3. On the master branch, directly commit the provider change to the schema file: provider = "postgresql"
  4. When you’re ready to deploy, PR + merge from dev into master (and definitely take advantage of autodeploy to Netlify)

@thedavid Thanks for the suggestion, that would indeed work!
However, wouldn’t it be better if developers didn’t have to think about this at all?

@rob I’ve seen the issue on Prisma’s repo indeed! :neutral_face:

Note that things seem to be moving though!
:slight_smile:

Anyway… given what you told me I couldn’t resist implementing my watching idea.
It’s actually more “elegant” than what I had thought I’d have to do!

From a high level, I:

  • Implemented a --watch flag for the rw db generate CLI
  • Changed the rw dev command to call yarn redwood db generate --watch instead of Prisma directly

As I suggested in my previous post, I modified my expandSchemaUnsupportedEnvVariables function so that it can write the expanded schema to a temporary file.
I refactored the db generate command so that it uses this capability to call Prisma’s generate command with the --schema param pointing to the temp file.
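
Roughly, the idea looks like this (just a sketch: the helper names expandSchemaToTempFile, generateFromExpandedSchema and watchAndGenerate, and the use of execSync and fs.watch, are illustrative rather than the exact code on my branch):

import fs from 'fs'
import path from 'path'
import { execSync } from 'child_process'

// Expand env() provider calls from the schema into a sibling
// schema.expanded.prisma file, leaving the original file untouched.
const expandSchemaToTempFile = (schemaPath) => {
  const contents = fs.readFileSync(schemaPath, 'utf-8')

  // Same expansion as before, but the result goes to a temporary file
  // instead of overwriting the original.
  const expanded = contents.replace(
    /((?:datasource|generator)[^}]+provider[^=]*=\W+)env\(["']([^"']*)["']\)/gm,
    (_match, lead, variable) => `${lead}"${process.env[variable]}"`
  )

  // The temp file has to live next to the original: Prisma uses the schema's
  // location to decide where to generate the client files.
  const tempPath = path.join(path.dirname(schemaPath), 'schema.expanded.prisma')
  fs.writeFileSync(tempPath, expanded)

  return {
    tempPath,
    cleanup: () => fs.unlinkSync(tempPath),
  }
}

// Generate the client from the expanded schema, then delete the temp file.
export const generateFromExpandedSchema = (schemaPath) => {
  const { tempPath, cleanup } = expandSchemaToTempFile(schemaPath)
  try {
    execSync(`yarn prisma2 generate --schema=${tempPath}`, { stdio: 'inherit' })
  } finally {
    cleanup()
  }
}

// With --watch, we watch the original schema ourselves instead of letting
// Prisma do it, and re-run the expansion + generation on every change
// (in practice this would probably want a debounce).
export const watchAndGenerate = (schemaPath) => {
  generateFromExpandedSchema(schemaPath)
  fs.watch(schemaPath, () => generateFromExpandedSchema(schemaPath))
}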

The result is a seamless dev experience allowing me to use sqlite locally and postgresql in production, without having to make any change to my workflow :slightly_smiling_face:

Note that the temporary file has to be in the same dir as the original, as Prisma will use the schema file’s path to determine where to generate the client files…
The file is deleted right away, so that shouldn’t be an issue. It could be added to .gitignore just in case.

I’ve pushed everything to my Redwood fork on GitHub; you can check the changes this required on that branch.

I’d be more than happy to open a PR, clean up the code, add tests and all that, if you’d like to move forward with this solution :slight_smile:

2 Likes

This is an amazing fix to this problem, bravo! Did you see this @peterp?

We have a meeting on Tuesday and one of the Prisma folks usually joins; let me see if I can nail them down on a date for adding the dynamic provider… it would be a shame to put all this work in and merge it into the codebase just to have them release native support the following week.

I’ll follow up on this thread and let you know what they say.

1 Like

Glad you like it :slight_smile:

Let’s wait for Prisma’s feedback indeed!
If anything, this allowed me to deep dive into some core parts of the project :slight_smile:

This is awesome, thank you @olance!

1 Like

I believe we are going to see some near-term movement from Prisma, per a conversation I had last Friday. No promises, however. But it’s definitely on their radar as a need + priority.

2 Likes

Looks like things are progressing on Prisma’s side :slight_smile:

@rob from what I understand, you also implemented a Netlify plugin to work around the issue?

1 Like

@olance he did! Have you seen (or tried) the plugin released in this hot-fix?

I’ve seen it, but not tried yet! :slight_smile:

I think I’ll try that tomorrow ^^

1 Like