[Guide] Power of GraphQL Caching

RedwoodJS + Envelop Response Cache

Discover the Hidden Power of GraphQL Caching

Note: This Guide was presented at the Redwood 1.0 Release Candidate Meetup on December 9, 2021

Congratulations, you built a RedwoodJS app and everyone :smiling_face_with_three_hearts: it. But now :scream: you're getting traffic spikes. You don't want to :disappointed: your users with :turtle: slow performance … or even :warning: errors.

Maybe you've started to see database connection timeouts, or sluggish response times for popular queries?

Yes, you should first investigate why. Perhaps you can optimize your SQL query? Maybe you have an N+1 query that is swamping the database with connections?

But there are situations where caching frequently accessed data is an excellent solution and an option you should consider.

In A brief Introduction to Caching, the Guild observes:

Huge GraphQL query operations can slow down your server as deeply nested selection sets can cause a lot of subsequent database reads or calls to other remote services.

What if we don't need to go through the execution phase at all for subsequent requests that execute the same query operation with the same variables?

A common practice for reducing slow requests is to leverage caching. There are many types of caching available. E.g., we could cache whole HTTP responses based on the POST body of the request, or use an in-memory cache within our GraphQL field resolver business logic in order to hit slow services less frequently.

With GraphQL such things become much harder and more complicated. First of all, we usually have only a single HTTP endpoint, /graphql, that only accepts POST requests. A query operation execution result could contain many different types of entities; thus, we need different strategies for caching GraphQL APIs.
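The idea in that last quoted paragraph can be sketched in a few lines. This is an illustrative toy, not the Envelop implementation: cache the whole execution result, keyed on the query document plus its variables.

```typescript
// Illustrative toy, not the Envelop implementation: cache the whole
// execution result keyed on the query document plus its variables.
type ExecutionResult = { data: unknown }

const responseCache = new Map<string, ExecutionResult>()

// Same query + same variables -> same key; different variables get
// their own cache entry.
function cacheKey(query: string, variables: Record<string, unknown>): string {
  return JSON.stringify({ query, variables })
}

function executeWithCache(
  query: string,
  variables: Record<string, unknown>,
  execute: () => ExecutionResult
): { result: ExecutionResult; hit: boolean } {
  const key = cacheKey(query, variables)
  const cached = responseCache.get(key)
  if (cached) return { result: cached, hit: true }
  const result = execute()
  responseCache.set(key, result)
  return { result, hit: false }
}

// The first call runs the resolver; the second skips execution entirely.
const run = () =>
  executeWithCache('{ posts { id } }', {}, () => ({ data: { posts: [] } }))
console.log(run().hit) // false
console.log(run().hit) // true
```

The hard parts the plugin solves on top of this toy are exactly the ones the quote raises: knowing which entities appear inside each cached result, so a mutation can invalidate only the affected entries.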

While there are a few third-party services that offer GraphQL caching, like GraphCDN and Layer0, that you could consider, RedwoodJS :evergreen_tree: has an answer, thanks to the :incoming_envelope: Envelop ecosystem, that is easy to test in dev, has no vendor lock-in, and lets you bring your own key-value storage to manage costs: the Response Cache.


:person_raising_hand: useResponseCache Plugin

Since RedwoodJS GraphQL supports the Envelop plugin ecosystem, you can easily add the useResponseCache plugin to have a GraphQL Cache in no time.

  • Huge GraphQL query operations can slow down your server with lots of database :open_book: reads or calls to remote services

  • Perfect for lots of read-only data that :hourglass: doesn't change frequently

  • Serverful and serverless support with a Redis-backed cache

  • Shared :earth_americas: cache across replicas is possible, even at the edge


SuperSimpleSetup™ in RedwoodJS

Out of the box, the useResponseCache plugin provides an LRU (Least Recently Used) in-memory cache, which is perfect to use in development or in serverful deploys.

FYI: The LRU cache isn't a good choice for serverless deploys since it won't persist across requests, but don't worry: there is a Redis-backed response cache for this that we'll look at as well.
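For intuition, here is a minimal LRU sketch (illustrative only, not the plugin's internals): a JavaScript Map remembers insertion order, so if every access re-inserts its key, the first key in the Map is always the least recently used one.

```typescript
// Minimal LRU cache sketch: the Map's insertion order doubles as the
// recency order, so eviction just removes the first key.
class LRUCache<K, V> {
  private map = new Map<K, V>()
  constructor(private maxSize: number) {}

  get(key: K): V | undefined {
    if (!this.map.has(key)) return undefined
    const value = this.map.get(key) as V
    // Re-insert to mark this key as most recently used.
    this.map.delete(key)
    this.map.set(key, value)
    return value
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) this.map.delete(key)
    this.map.set(key, value)
    if (this.map.size > this.maxSize) {
      // Evict the least recently used entry (first in insertion order).
      const oldest = this.map.keys().next().value as K
      this.map.delete(oldest)
    }
  }
}

const lru = new LRUCache<string, string>(2)
lru.set('a', 'A')
lru.set('b', 'B')
lru.get('a') // touch 'a' so 'b' becomes least recently used
lru.set('c', 'C') // evicts 'b'
console.log(lru.get('b')) // undefined
console.log(lru.get('a')) // 'A'
```

This also makes the serverless caveat concrete: the Map lives in the process's memory, so a fresh function invocation starts with an empty cache.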

To set up the LRU Response Cache:

  • Add @envelop/response-cache to your app's api side

yarn workspace api add @envelop/response-cache

  • In your api/src/functions/graphql.ts, import
import { useResponseCache } from '@envelop/response-cache'
  • and also, in createGraphQLHandler, add useResponseCache() to your set of extraPlugins
export const handler = createGraphQLHandler({
  loggerConfig: {
    logger,
    options: { operationName: true, tracing: true, query: true },
  },
  directives,
  sdls,
  services,
  extraPlugins: [
    useResponseCache(), // 👈 add to extraPlugins
  ],
  onException: () => {
    // Disconnect from your database with an unhandled exception.
    db.$disconnect()
  },
})
  • That's it!
  • Restart your dev server, and you should start seeing cached responses

How do I know if I am caching?

When you query GraphQL, you should see some response cache information:

  "extensions": {
    "responseCache": {
      "hit": false,
      "didCache": true,
      "ttl": null
    },

This means that the response wasn't found in the cache (hit: false), therefore it was cached (didCache: true), and it is cached forever* (ttl: null).

  • "forever": Until invalidated by a mutation or directly. And you can set custom TTLs for all your expiration and invalidation needs.

Subsequent queries will then show:

  "extensions": {
    "responseCache": {
      "hit": true
    },

This means the cache was used and your resolver and database query were never invoked! Win!

Note: The responseCache extension information is included by default in development. If you want to include this information in production, you should add:

includeExtensionMetadata: true,

to your ResponseCache configuration.

Configuring the Cache

Your Response Cache can be extensively configured to:

  • cache only the models and queries you want
  • use custom expiration times for specific models and queries
  • enable or disable the cache
  • cache per authenticated user
  • show hit/miss and ttl diagnostic data via includeExtensionMetadata
  • much, much more
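As a hedged sketch of what such a configuration might look like (option names are from the Envelop Response Cache docs and may differ by plugin version; context.currentUser is a Redwood convention and is an assumption about your app):

```typescript
useResponseCache({
  // global default TTL in milliseconds
  ttl: 60_000,
  // per-type TTL overrides
  ttlPerType: { Stock: 500 },
  // per-field TTL overrides
  ttlPerSchemaCoordinate: { 'Query.recentPosts': 10_000 },
  // never cache responses containing these types
  ignoredTypes: ['Secret'],
  // cache per authenticated user by returning a session id (null = shared cache)
  session: (context) => context.currentUser?.id ?? null,
  // show hit/miss and ttl diagnostics outside of development too
  includeExtensionMetadata: true,
})
```

Check the Recipes below for the authoritative option list and semantics.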

The best resource for Response Cache configuration is these Recipes.

Redis-backed Cache

But, what if you are running serverless and also need a response cache?

What is Redis? Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache, and message broker.

You'll need a Redis instance, which you can install via Homebrew and run locally (and for free); or you can use a third-party service provider like Upstash, Heroku Redis, Railway Redis, or many others (note: these may be paid services).

To implement a Redis-backed cache, start with the same example above, but now:

  • Add @envelop/response-cache-redis and ioredis to your api side via
yarn workspace api add @envelop/response-cache-redis
yarn workspace api add ioredis
  • Create an api/src/lib/redis.ts to create a Redis client (similar to how you have a Prisma client).
import Redis from 'ioredis'

export const redis = new Redis(process.env.REDIS)
  • And be sure to set your Redis connection info in your env vars: either your local instance or the connection info your provider gave you. (Note: rediss is for SSL connections.)

#REDIS=rediss://:pwd@host:PORT
REDIS=redis://localhost:6379
  • Last, in your GraphQLHandler:
import { createRedisCache } from '@envelop/response-cache-redis'
import { redis } from 'src/lib/redis'

// ...
  extraPlugins: [
    useResponseCache({ cache: createRedisCache({ redis }) }),
  ],
// ...

And now, your plugin will use Redis instead to cache and invalidate your responses.

Note: It can be helpful to create api/src/lib/responseCache.ts to initialize your response cache and export a responseCacheConfig to use in the plugin, like useResponseCache(responseCacheConfig)


:hammer_and_pick: Caching Powers Unearthed

  • :racing_car: Speed! Your response times drop into the low milliseconds!

  • Reduce database load. Give your db :person_in_lotus_position: breathing room to do the hard stuff

  • No :closed_lock_with_key: vendor lock-in. Services like GraphCDN are terrific, but you can manage your own

  • Save :dollar: since you may not need to move to larger, pricier databases, and you can reduce function invocations with edge caching


TeamStream Case Study


  • TeamStream saw traffic spikes around the start of events, when users jump in to watch

  • Event data doesn't change much, especially afterward

  • Needed a Serverless solution


Prisma Invalidation Middleware

The following is an example of how one might use Prisma middleware to manually invalidate entities when Prisma modifies data.

The handlePrismaInvalidation function considers several actions (update, upsert, etc.) performed on target models that you wish to manually invalidate.

// api/src/lib/responseCache.ts

import { createRedisCache } from '@envelop/response-cache-redis'

import { logger } from './logger'
import { redis } from 'src/lib/redis'

const EXPIRE_IN_SECONDS =
  (process.env.EXPIRE_IN_SECONDS && parseInt(process.env.EXPIRE_IN_SECONDS)) ||
  30

export const isPrismaMiddlewareInvalidationEnabled =
  process.env.ENABLE_PRISMA_MIDDLEWARE_INVALIDATION === 'true'

const enableCache = (context) => {
  const enabled = context.request.headers['enable-response-cache']
  if (enabled && enabled === 'true') return true
  if (enabled && enabled !== 'true') return false
  return true
}

// Create the Redis Cache
export const cache = createRedisCache({ redis })

// Configure the Response Cache
export const responseCacheConfig = {
  enabled: (context) => enableCache(context),
  cache,
  invalidateViaMutation: !isPrismaMiddlewareInvalidationEnabled,
  ttl: EXPIRE_IN_SECONDS * 1000,
  includeExtensionMetadata: true,
}

const ACTIONS_TO_INVALIDATE = [
  'update',
  'updateMany',
  'upsert',
  'delete',
  'deleteMany',
]

const MODELS_TO_INVALIDATE = [
  'Album',
  'Artist',
  'Customer',
  'Employee',
  'Genre',
  'Invoice',
  'InvoiceLine',
  'MediaType',
  'Playlist',
  'Track',
]

export const buildPrismaEntityToInvalidate = ({ model, id }) => {
  return { typename: model, id }
}

export const buildPrismaEntitiesToInvalidate = ({ model, ids }) => {
  return ids.map((id) => {
    return buildPrismaEntityToInvalidate({ model, id })
  })
}

export const handlePrismaInvalidation = async (params) => {
  const model = params.model
  const action = params.action
  // simple where with id
  const id = params.args?.where?.id
  // handles updateMany where id is in a list
  const ids = params.args?.where?.id?.in

  const isActionToInvalidate = ACTIONS_TO_INVALIDATE.includes(action)

  if (isActionToInvalidate && model && id) {
    const isModelToInvalidate = MODELS_TO_INVALIDATE.includes(model)

    if (isActionToInvalidate && isModelToInvalidate) {
      const entitiesToInvalidate = []

      if (ids) {
        ids.forEach((id) => {
          entitiesToInvalidate.push(
            buildPrismaEntityToInvalidate({ model, id })
          )
        })
      } else {
        entitiesToInvalidate.push(buildPrismaEntityToInvalidate({ model, id }))
      }

      logger.debug(
        { action, model, entitiesToInvalidate },
        'Invalidating model'
      )
      await cache.invalidate(entitiesToInvalidate)
    }
  }
}

which is used by

// api/src/lib/db.ts

// See https://www.prisma.io/docs/reference/tools-and-interfaces/prisma-client/constructor
// for options.

import { PrismaClient } from '@prisma/client'

import { emitLogLevels, handlePrismaLogging } from '@redwoodjs/api/logger'

import {
  handlePrismaInvalidation,
  isPrismaMiddlewareInvalidationEnabled,
} from './responseCache'
import { logger } from './logger'

/*
 * Instance of the Prisma Client
 */
export const db = new PrismaClient({
  log: emitLogLevels(['query', 'info', 'warn', 'error']),
})

handlePrismaLogging({
  db,
  logger,
  logLevels: ['query', 'info', 'warn', 'error'],
})

if (isPrismaMiddlewareInvalidationEnabled) {
  db.$use(async (params, next) => {
    await handlePrismaInvalidation(params)
    const result = await next(params)
    return result
  })
}

specifically:

if (isPrismaMiddlewareInvalidationEnabled) {
  db.$use(async (params, next) => {
    await handlePrismaInvalidation(params)
    const result = await next(params)
    return result
  })
}

which says to use handlePrismaInvalidation as middleware.

Now, when, say, an Album is updated, an entity to invalidate is constructed via buildPrismaEntityToInvalidate, and then await cache.invalidate(entitiesToInvalidate) manually invalidates that entity.
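That flow can be sketched self-contained, with the cache stubbed out so it runs without Redis or Prisma. The param shape mirrors the middleware params above in simplified form, and the real cache.invalidate is async:

```typescript
// Simplified, runnable sketch of the invalidation flow above.
type Entity = { typename: string; id: string | number }

const buildPrismaEntityToInvalidate = ({ model, id }: { model: string; id: string | number }): Entity => ({
  typename: model,
  id,
})

// Stand-in for the Redis-backed cache (the real invalidate is async)
const invalidated: Entity[] = []
const cache = { invalidate: (entities: Entity[]) => invalidated.push(...entities) }

// Shaped like Prisma middleware params, e.g. db.album.update({ where: { id: 1 }, data: ... })
type Params = { model: string; action: string; args?: { where?: { id?: string | number } } }

const handlePrismaInvalidation = (params: Params) => {
  const id = params.args?.where?.id
  const actions = ['update', 'updateMany', 'upsert', 'delete', 'deleteMany']
  if (actions.includes(params.action) && params.model && id != null) {
    cache.invalidate([buildPrismaEntityToInvalidate({ model: params.model, id })])
  }
}

handlePrismaInvalidation({ model: 'Album', action: 'update', args: { where: { id: 1 } } })
console.log(invalidated) // [ { typename: 'Album', id: 1 } ]
```

Reads like findUnique never hit the actions list, so they leave the cache intact; only writes trigger invalidation.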


What's Next?


Download the Slides

redwood-graphql-caching.pdf (768.0 KB)

12 Likes

Thank you for posting this!

I believe I've set this up, but I'm only seeing the below included in the Network tab in development and not in production. Is this expected behavior, or are you also able to see this in your production app?

"extensions": {
    "responseCache": {
      "hit": true
    },
    ...
}

For context, I'm using Redis hosted by Upstash with my app hosted by Netlify. And I believe I've messed something up because - if I'm reading them correctly - the Upstash stats seem to indicate that I'm storing data in the cache, but not ever retrieving it:

Any thoughts would be appreciated! Otherwise I'll follow up when I pick this back up with fresh eyes. And thanks again – getting an easy caching solution up is a HUGE help :slight_smile:

Yes, you will see the extensions.responseCache hit or miss in the GraphQL execution response (not inside the data, but you will see it if you output the GraphQL request with a query from Paw or Insomnia).

A few things:

  1. Could you share your ResponseCache config? Is your ttl too low? Is the info expiring too quickly?
  2. Are there any errors logged in production?
  3. Do your models have ids (and are they named id)? And do your queries return ids? The cache key needs the id.
  4. Do your queries use Operation Names? Those are helpful.
  5. If you query Redis, do you see data?
1 Like

Thanks for the quick response!

Could you share your ResponseCache config? Is your ttl too low? Is the info expiring too quickly?

Here is my response config. I've attempted to keep it minimal to start and only cache responses for a certain query. I'm attempting to cache them for quite a while (30 days), which is probably a bit unusual.

useResponseCache({
    cache: createRedisCache({ redis }),
    ttl: 0,
    ttlPerSchemaCoordinate: {
      "Query.redwood": 0,
      "Query.jobPost": 2592000000, // Cache job posts for 30 days
    },
  }),
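As a quick sanity check on that TTL value (useResponseCache TTLs are in milliseconds):

```typescript
// 30 days expressed in milliseconds
const THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000
console.log(THIRTY_DAYS_MS === 2592000000) // true
```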

Are there any errors logged in Production?

Nope, none that I saw last night, though I'll take a closer look when I have some more time to investigate.

Do your models have ids (and are the named id)? And your queries return ids ā€“ the cache key needs the id?

Nope, the model uses uuids instead of ids. Also, the query can take either a uuid or a slug and returns an object. And I've only used this query (while caching was up) in a way that passes it a slug… I wouldn't be too surprised if this is the issue. Here's the query in the sdl file for clarity:

type Query {
    jobPost(uuid: String, slug: String): JobPost! @skipAuth
}

Do your queries use Operation Names?

This query does not use operation names since the query name itself is so explicit, but I'm happy to add one during the debugging process.

If you Query Redis, do you see data?

I'll follow up on this one when I have a bit more time to investigate (either tonight or a little later in the week).

Thank you again for your help and quick response!

Aha, can you try customizing the fields that are used for building the cache ID:

const getEnveloped = envelop({
  plugins: [
    // ... other plugins ...
    useResponseCache({
      // use custom identifiers instead of `id` field.
      idFields: ['uuid', 'slug'],
    }),
  ],
});

The identifier is important for constructing the cacheKey and perhaps that is not letting it find the item in the cache.
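To see why idFields matters, here is an illustrative sketch (an assumption about the mechanism, not the plugin's actual code) of collecting "typename:id" entity records from a response. With the default ['id'], a uuid-keyed object yields nothing to track or invalidate:

```typescript
// Illustrative sketch: walk a response and collect entity records
// ("typename:id"), honoring a configurable list of id fields.
type Rec = Record<string, unknown>

function collectEntities(data: Rec, idFields: string[] = ['id']): string[] {
  const found: string[] = []
  const walk = (node: unknown): void => {
    if (Array.isArray(node)) return node.forEach(walk)
    if (node && typeof node === 'object') {
      const obj = node as Rec
      const typename = obj['__typename']
      // First configured id field present on this object, if any
      const idField = idFields.find((f) => obj[f] != null)
      if (typeof typename === 'string' && idField) {
        found.push(`${typename}:${String(obj[idField])}`)
      }
      Object.values(obj).forEach(walk)
    }
  }
  walk(data)
  return found
}

// With the default ['id'], a uuid-keyed object yields no entity record...
console.log(collectEntities({ jobPost: { __typename: 'JobPost', uuid: 'abc' } }))
// []
// ...but with idFields: ['uuid', 'slug'] it can be tracked and invalidated.
console.log(collectEntities({ jobPost: { __typename: 'JobPost', uuid: 'abc' } }, ['uuid', 'slug']))
// [ 'JobPost:abc' ]
```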

1 Like

Ah, definitely. I'll give that a shot later tonight and will follow up with results. Thank you @dthyresson!

1 Like

I thiiiiiink I'm good to go now.

It seemed that adding in idFields: ['uuid', 'slug'] didn't make a difference, so I switched away from Upstash and toward Railway, though in retrospect I think there might not be much of a difference other than Railway's cleaner interface and metrics that update faster.

Railway's metrics seem to show that, actually, I am both storing (inbound) and retrieving (outbound) data from Redis. (I suspect this was the case for Upstash, too, after seeing what I believe were delayed metrics from them this morning.):

And I've confirmed that individual objects that I'm expecting to store are getting stored in Redis. So AFAICT it seems that the cache is working, but I'm only able to view the below responseCache payload in development and not in production.

responseCache: {
    "hit": false,
    "didCache": true,
    "ttl": 2592000000
}

@dthyresson does this sound feasible to you, or do you think I might be missing something? I haven't noticed a huge performance boost, so I'm open to the idea that I'm not quite right.

No, that should be present in all environments.

But actually, there may be a config setting for this; let me check the docs again. And maybe it is on in dev by default.

Ah there is:

You need to add:

includeExtensionMetadata: true,

That should include this now in production.

1 Like

Thank you!

There is a new Redis client from Upstash that could work well in a serverless world and is worth trying as an ioredis replacement:

upstash/upstash-redis

From them:

Upstash Redis

An HTTP/REST based Redis client built on top of Upstash REST API.

It is the only connectionless (HTTP based) Redis client and designed for:

  • Serverless functions (AWS Lambda …)
  • Cloudflare Workers (see the example)
  • Fastly Compute@Edge
  • Next.js, Jamstack …
  • Client side web/mobile applications
  • WebAssembly
  • and other environments where HTTP is preferred over TCP.

See the list of APIs supported.

1 Like

@dthyresson have you given any thought to how we might add support for upstash-redis in createRedisCache, or whether there's a better path forward for using upstash-redis with useResponseCache?

I might have some time to work on a PR for this on Saturday, but obviously I don't have as much context as you do and wouldn't want to spin my wheels going in an unhelpful direction.

I just ran a very quick local test for using it by replacing my redis.ts with the below:

import Redis, { auth } from '@upstash/redis'

auth(process.env.UPSTASH_URL, process.env.UPSTASH_TOKEN)

export const redis = Redis

And it seems that I was able to send requests to Upstash, but not save any data:

And within my app I started displaying errors within my cells:


[quote="tctrautman, post:12, topic:2624"]
I just ran a very quick local test for using it by replacing my redis.ts
[/quote]

Interesting. That's how I would have set it up as well.

I haven't tried @upstash/redis yet (just found out about it), but I'll put it on my list to test.

I started displaying errors within my cells:

When you look at the response coming back, it's likely not JSON. Is there any info in the response payload that can point to what might be going on?

One thing to check - there is a write and a read-only token:

You may want to confirm that you are using a token that can write to Upstash.

1 Like

@tctrautman

So, I just had a look and I think the issue is that the envelop Redis cache needs a Redis instance (to do get, set, and smembers), but the Upstash client is configured a bit differently:

import { auth, set } from '@upstash/redis';

auth('UPSTASH_REDIS_REST_URL', 'UPSTASH_REDIS_REST_TOKEN');

set('key', 'value').then(({ data }) => {
  console.log(data);
  // -> "OK"
});

You use their auth and then import get, set, and the other commands from it.

So, I wonder if the plugin needs a different cache mechanism to support the Upstash client.

I'll chat with the Guild and get their thoughts – but at the moment I don't think they are compatible.

Redis is making connections and the Upstash client is making REST requests.
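For anyone exploring an adapter, the contract the plugin expects of a custom cache is small: get, set, and invalidate, per the Envelop Response Cache docs (the exact signatures below are a hedged sketch, not the library's types). An in-memory version shows the shape any backing store — TCP Redis or an HTTP client — would need to provide:

```typescript
// Hedged sketch of the cache contract: store responses by id, index
// which entities each response contains, and drop responses on invalidate.
type EntityRecord = { typename: string; id?: string | number }

type CacheLike = {
  set(id: string, data: unknown, entities: EntityRecord[], ttl: number): Promise<void>
  get(id: string): Promise<unknown | undefined>
  invalidate(entities: EntityRecord[]): Promise<void>
}

function createInMemoryCache(): CacheLike {
  const responses = new Map<string, unknown>()
  // entity key ("Album:1") -> ids of cached responses containing it
  const entityIndex = new Map<string, Set<string>>()
  const keyOf = (e: EntityRecord) => (e.id != null ? `${e.typename}:${e.id}` : e.typename)

  return {
    async set(id, data, entities, _ttl) {
      responses.set(id, data)
      for (const e of entities) {
        if (!entityIndex.has(keyOf(e))) entityIndex.set(keyOf(e), new Set())
        entityIndex.get(keyOf(e))!.add(id)
      }
    },
    async get(id) {
      return responses.get(id)
    },
    async invalidate(entities) {
      for (const e of entities) {
        for (const responseId of entityIndex.get(keyOf(e)) ?? []) {
          responses.delete(responseId)
        }
      }
    },
  }
}
```

The entity index is why the Redis implementation leans on set-membership commands and pipelining: invalidating one entity must find every cached response that contains it.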

1 Like

Some updates – turns out it is possible to use the Upstash SDK as a client:

But we'll have to wait for them to support Redis pipelining, which the plugin needs. They say ~2 weeks, so I will check back and then document.

1 Like

@dthyresson wow, that's great news, thank you for getting such a quick update. Looking forward to this!

Hello,

I followed the tutorial. I got response caching to work, but I always get

{
  "hit": false,
  "didCache": true,
  "ttl": 30000
}

I always seem to be able to store the cache in Redis, but am not able to actually fetch data correctly? I never got a result where I got hit: true. Can anyone point me in the right direction? Thanks!

Hmm, I don't think I ever ran into that issue @pacholoamit, but here are some things I'd look into:

  1. Are you sure you're sending requests within the ttl? Have you tried extending the ttl?
  2. Have you tried accessing Redis via the CLI and examining what exists in the cache after you send a request that gets cached? Do you see a new record added after your request? Does the key line up with what you're sending from the client?

But we'll have to wait for them to support Redis pipelining, which the plugin needs

@dthyresson do you know whether The Guild was able to make the necessary changes to createRedisCache that would allow Upstash's Redis client to be used? If not, no worries – just curious, as I might take another swing at this over the next few weeks.

Some work on that PR began recently. See: Support Upstash Redis REST SDK in useResponseCache with Redis cache as store · Issue #1218 · n1ru4l/envelop · GitHub

Would you be able to help the author test?

Thanks!

1 Like