RedwoodJS + Envelop Response Cache
Discover the Hidden Power of GraphQL Caching
Note: This guide was presented at the Redwood 1.0 Release Candidate Meetup on December 9, 2021
Congratulations, you built a RedwoodJS app and everyone loves it. But now you're getting traffic spikes, and you don't want to hit your users with slow performance … or even errors.
Maybe you've started to see database connection timeouts, or sluggish response times for popular queries?
Yes, you should first investigate why. Perhaps you can optimize your SQL query? Maybe you have an N+1 query that is swamping the database with connections?
But there are situations where caching frequently accessed data is an excellent solution and an option you should consider.
In A Brief Introduction to Caching, the Guild observes:
Huge GraphQL query operations can slow down your server as deeply nested selection sets can cause a lot of subsequent database reads or calls to other remote services.
What if we donāt need to go through the execution phase at all for subsequent requests that execute the same query operation with the same variables?
A common practice for reducing slow requests is to leverage caching. There are many types of caching available. E.g., we could cache whole HTTP responses based on the POST body of the request, or use an in-memory cache within our GraphQL field resolver business logic in order to hit slow services less frequently.
With GraphQL such things become much harder and more complicated. First of all, we usually only have a single HTTP endpoint /graphql that only accepts POST requests. A query operation execution result could contain many different types of entities; thus, we need different strategies for caching GraphQL APIs.
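To make the idea concrete: response caching boils down to keying a store on the operation text plus its variables, and skipping execution entirely on a key hit. Here is a minimal, framework-free sketch; the function names are hypothetical, not the plugin's internals, and real implementations also normalize the document and sort variable keys.

```typescript
import { createHash } from 'node:crypto'

// Build a stable cache key from the query string and its variables.
// JSON.stringify is used for brevity only.
export function cacheKey(
  query: string,
  variables: Record<string, unknown> = {}
): string {
  return createHash('sha256')
    .update(query)
    .update(JSON.stringify(variables))
    .digest('hex')
}

// A toy response cache: if the key is present, the execute phase is skipped.
const store = new Map<string, unknown>()

export async function cachedExecute(
  query: string,
  variables: Record<string, unknown>,
  execute: () => Promise<unknown>
): Promise<{ result: unknown; hit: boolean }> {
  const key = cacheKey(query, variables)
  if (store.has(key)) return { result: store.get(key), hit: true }
  const result = await execute()
  store.set(key, result)
  return { result, hit: false }
}
```

Note that the same query with different variables produces a different key, which is exactly why a plain HTTP-body cache maps so naturally onto GraphQL POST requests.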
There are a few third-party services that offer GraphQL caching, like GraphCDN and Layer0, that you could consider. But thanks to the Envelop ecosystem, RedwoodJS has an answer that is easy to test in dev, has no vendor lock-in, and lets you bring your own key-value storage to manage costs: the Response Cache.
useResponseCache Plugin
Since RedwoodJS GraphQL supports the Envelop plugin ecosystem, you can easily add the useResponseCache plugin to have a GraphQL cache in no time.
- Huge GraphQL query operations can slow down your server with lots of database reads or calls to remote services
- Perfect for lots of read-only data that doesn't change frequently
- Serverful and serverless support with a Redis-backed cache
- A shared cache across replicas is possible, even at the edge
SuperSimpleSetup™ in RedwoodJS
Out of the box, the useResponseCache plugin provides an LRU (Least Recently Used) in-memory cache, which is perfect to use in development or in serverful deploys.
FYI: The LRU cache isn't a good choice for serverless deploys since it won't persist across requests, but don't worry: there is a Redis-backed response cache for this that we'll look at as well.
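To illustrate what "least recently used" means here, a tiny Map-based LRU sketch (not the plugin's actual implementation, just the eviction idea):

```typescript
// Minimal LRU cache: a Map preserves insertion order, so the first key
// is always the least recently used entry.
class LruCache<V> {
  private store = new Map<string, V>()
  constructor(private max: number) {}

  get(key: string): V | undefined {
    const value = this.store.get(key)
    if (value !== undefined) {
      // Re-insert to mark this key as most recently used.
      this.store.delete(key)
      this.store.set(key, value)
    }
    return value
  }

  set(key: string, value: V): void {
    this.store.delete(key)
    this.store.set(key, value)
    if (this.store.size > this.max) {
      // Evict the least recently used entry (the first key in the Map).
      this.store.delete(this.store.keys().next().value as string)
    }
  }
}
```

Because this lives in the function's memory, a serverless instance starts with an empty Map on every cold start, which is why Redis is the better fit there.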
To set up the LRU Response Cache:
- Add @envelop/response-cache to your app's api side:
yarn workspace api add @envelop/response-cache
- In your api/src/functions/graphql.ts, import:
import { useResponseCache } from '@envelop/response-cache'
- Then, in createGraphQLHandler, add useResponseCache() to your set of extraPlugins:
export const handler = createGraphQLHandler({
loggerConfig: {
logger,
options: { operationName: true, tracing: true, query: true },
},
directives,
sdls,
services,
extraPlugins: [
useResponseCache(), // add to extraPlugins
],
onException: () => {
// Disconnect from your database with an unhandled exception.
db.$disconnect()
},
})
- That's it!
- Restart your dev server, and you should start seeing cached responses
How do I know if I am caching?
When you query GraphQL, you should see some response cache information:
"extensions": {
"responseCache": {
"hit": false,
"didCache": true,
"ttl": null
},
This means that the response wasn't found in the cache (hit: false), so it was executed and cached (didCache: true), and it is cached forever* (ttl: null).
- *"forever": until invalidated by a mutation or directly. And you can set custom TTLs for all your expiration and invalidation needs.
Subsequent queries will then show:
"extensions": {
"responseCache": {
"hit": true
},
This means the cache was used and your resolver and database query were never invoked! Win!
Note: The responseCache extension information is included by default in development. If you want to include this information in production, then you should add:
includeExtensionMetadata: true,
to your ResponseCache configuration.
Configuring the Cache
Your Response Cache can be extensively configured to:
- cache only the models and queries you want
- use custom expiration times for specific models and queries
- enable or disable the cache
- cache per authenticated user
- show hit/miss and TTL diagnostic data via includeExtensionMetadata
- much, much more
The best resources for Response Cache configuration are these Recipes.
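For example, a configuration along these lines covers most of the points above. The option names come from the @envelop/response-cache plugin; the model names, TTL values, and the currentUser session lookup are illustrative, not prescriptive.

```typescript
import { useResponseCache } from '@envelop/response-cache'

useResponseCache({
  // Default time-to-live for cached results, in milliseconds
  ttl: 60_000,
  // Override the TTL per model (typename)
  ttlPerType: { Album: 600_000 },
  // Or per schema coordinate, e.g. one specific query
  ttlPerSchemaCoordinate: { 'Query.tracks': 10_000 },
  // Cache per authenticated user by deriving a session id from context;
  // returning null means the result is shared across all users
  session: (context) => context.currentUser?.id ?? null,
  // Show hit/miss and ttl diagnostics in the response extensions
  includeExtensionMetadata: true,
})
```

The session function is the key to per-user caching: two users with different session ids never see each other's cached responses.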
Redis-backed Cache
But, what if you are running serverless and also need a response cache?
What is Redis? Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache, and message broker.
You'll need a Redis instance, which you can install via Homebrew and run locally (and for free); or you can use a third-party service provider like Upstash, Heroku Redis, Railway Redis, or many others (note: these may be paid services).
To implement a Redis-backed cache, start with the same example above, but now:
- Add @envelop/response-cache-redis and ioredis to your api side via:
yarn workspace api add @envelop/response-cache-redis
yarn workspace api add ioredis
- Create an api/src/lib/redis.ts to create a Redis client (similar to how you have a Prisma client):
import Redis from 'ioredis'
export const redis = new Redis(process.env.REDIS)
- And be sure to set your Redis connection info in your env vars: either your local instance or the connection info your provider gave you. (Note: rediss is for SSL connections.)
#REDIS=rediss://:pwd@host:PORT
REDIS=redis://localhost:6379
- Last, in your GraphQLHandler
import { createRedisCache } from '@envelop/response-cache-redis'
import { redis } from 'src/lib/redis'
// ...
extraPlugins: [
useResponseCache({ cache: createRedisCache({ redis }) }),
]
// ...
And now, your plugin will use Redis instead to cache and invalidate your responses.
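With the Redis cache object in hand, you can also invalidate entities directly, e.g. from a script or a service, since the cache exposes an invalidate method. A sketch (the Album typename and id are illustrative):

```typescript
import { createRedisCache } from '@envelop/response-cache-redis'
import { redis } from 'src/lib/redis'

const cache = createRedisCache({ redis })

// Drop every cached operation result that contained Album 42
await cache.invalidate([{ typename: 'Album', id: 42 }])

// Or drop all cached results that contain any Album at all
await cache.invalidate([{ typename: 'Album' }])
```

This is the same mechanism the Prisma middleware example below relies on.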
Note: It can be helpful to create api/src/lib/responseCache.ts to initialize your response cache and export a responseCacheConfig to use in the plugin, like useResponseCache(responseCacheConfig).
Caching Powers Unearthed
- Speed! Your response times drop into the low msecs!
- Reduce database load. Give your db breathing room to do the hard stuff
- No vendor lock-in. Services like GraphCDN are terrific, but you can manage your own
- Save money, since you may not need to move to larger, pricier dbs, and you can reduce function invocations with edge caching
TeamStream Case Study
- TeamStream saw traffic spikes around the start of events, when users jump in to watch
- Event data doesn't change much, especially after an event ends
- Needed a serverless solution
Prisma Invalidation Middleware
The following is an example of how one might use Prisma middleware to manually invalidate entities when Prisma modifies data.
The handlePrismaInvalidation function considers several actions (update, upsert, etc.) that act upon target models you wish to manually invalidate.
// api/src/lib/responseCache.ts
import { createRedisCache } from '@envelop/response-cache-redis'
import { logger } from './logger'
import { redis } from 'src/lib/redis'
const EXPIRE_IN_SECONDS =
(process.env.EXPIRE_IN_SECONDS && parseInt(process.env.EXPIRE_IN_SECONDS)) ||
30
export const isPrismaMiddlewareInvalidationEnabled =
process.env.ENABLE_PRISMA_MIDDLEWARE_INVALIDATION === 'true'
const enableCache = (context) => {
const enabled = context.request.headers['enable-response-cache']
if (enabled && enabled === 'true') return true
if (enabled && enabled !== 'true') return false
return true
}
// Create the Redis Cache
export const cache = createRedisCache({ redis })
// Configure the Response Cache
export const responseCacheConfig = {
enabled: (context) => enableCache(context),
cache,
invalidateViaMutation: !isPrismaMiddlewareInvalidationEnabled,
ttl: EXPIRE_IN_SECONDS * 1000,
includeExtensionMetadata: true,
}
const ACTIONS_TO_INVALIDATE = [
'update',
'updateMany',
'upsert',
'delete',
'deleteMany',
]
const MODELS_TO_INVALIDATE = [
'Album',
'Artist',
'Customer',
'Employee',
'Genre',
'Invoice',
'InvoiceLine',
'MediaType',
'Playlist',
'Track',
]
export const buildPrismaEntityToInvalidate = ({ model, id }) => {
return { typename: model, id }
}
export const buildPrismaEntitiesToInvalidate = ({ model, ids }) => {
return ids.map((id) => {
return buildPrismaEntityToInvalidate({ model, id })
})
}
export const handlePrismaInvalidation = async (params) => {
const model = params.model
const action = params.action
// simple where with id
const id = params.args?.where?.id
// handles updateMany where id is in a list
const ids = params.args?.where?.id?.in
const isActionToInvalidate = ACTIONS_TO_INVALIDATE.includes(action)
if (isActionToInvalidate && model && id) {
const isModelToInvalidate = MODELS_TO_INVALIDATE.includes(model)
if (isActionToInvalidate && isModelToInvalidate) {
const entitiesToInvalidate = []
if (ids) {
ids.forEach((id) => {
entitiesToInvalidate.push(
buildPrismaEntityToInvalidate({ model, id })
)
})
} else {
entitiesToInvalidate.push(buildPrismaEntityToInvalidate({ model, id }))
}
logger.debug(
{ action, model, entitiesToInvalidate },
'Invalidating model'
)
await cache.invalidate(entitiesToInvalidate)
}
}
}
which is used by
// api/src/lib/db.ts
// See https://www.prisma.io/docs/reference/tools-and-interfaces/prisma-client/constructor
// for options.
import { PrismaClient } from '@prisma/client'
import { emitLogLevels, handlePrismaLogging } from '@redwoodjs/api/logger'
import {
handlePrismaInvalidation,
isPrismaMiddlewareInvalidationEnabled,
} from './responseCache'
import { logger } from './logger'
/*
* Instance of the Prisma Client
*/
export const db = new PrismaClient({
log: emitLogLevels(['query', 'info', 'warn', 'error']),
})
handlePrismaLogging({
db,
logger,
logLevels: ['query', 'info', 'warn', 'error'],
})
if (isPrismaMiddlewareInvalidationEnabled) {
db.$use(async (params, next) => {
await handlePrismaInvalidation(params)
const result = await next(params)
return result
})
}
specifically:
if (isPrismaMiddlewareInvalidationEnabled) {
db.$use(async (params, next) => {
await handlePrismaInvalidation(params)
const result = await next(params)
return result
})
}
which says to use handlePrismaInvalidation
as middleware.
Now, when, say, an Album is updated, an entity to invalidate is constructed via buildPrismaEntityToInvalidate, and then await cache.invalidate(entitiesToInvalidate) manually invalidates that entity.
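The mapping from middleware params to entities can be exercised in isolation. Here is a standalone restatement of the extraction logic above, with a hypothetical params type shaped like Prisma's middleware params:

```typescript
// Hypothetical shape of the relevant parts of Prisma middleware params
type PrismaParams = {
  model?: string
  action: string
  args?: { where?: { id?: number | { in?: number[] } } }
}

// Extract the cache entities a middleware call should invalidate.
export function entitiesToInvalidate(params: PrismaParams) {
  const { model } = params
  if (!model) return []
  const id = params.args?.where?.id
  if (id === undefined) return []
  // updateMany-style calls: where: { id: { in: [...] } }
  if (typeof id === 'object') {
    return (id.in ?? []).map((i) => ({ typename: model, id: i }))
  }
  // simple calls: where: { id }
  return [{ typename: model, id }]
}
```

An update of Album 7 thus yields [{ typename: 'Album', id: 7 }], while an updateMany over ids [1, 2] yields one entity per id, matching what cache.invalidate expects.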
What's Next?
- Lots of options to determine what gets cached and for how long
- Explore cache invalidation via GraphQL mutations
- Manual invalidation via Prisma middleware is great when your data changes outside GraphQL
- A local dev implementation to experiment with is a snap!
- Prime your cache via cron jobs
Download the Slides
redwood-graphql-caching.pdf (768.0 KB)