Prerender proposal

I’ve been thinking about what the developer experience should be for enabling route-based prerender in a Redwood app and here’s a proposal:

Let’s say your home page is a marketing page that doesn’t take any parameters and should just be static. All you’ll have to do is add prerender to the route:

// web/src/Routes.js
<Route path="/" page={HomePage} name="home" prerender />

And that’s it! It will be rendered out at build-time and delivered as a static file. Now, if you have route parameters, it gets more interesting. You’ll still specify prerender in the route:

// web/src/Routes.js
<Route path="/todo/{id}" page={TodoPage} name="todo" prerender />

But now you need to get all possible values of id at build time. There are two ways you might want to do this. Perhaps you have a static list of IDs that don’t change very often (you would be happy changing a code file to update it). In that case, you could create a file on the web side at src/prerender.js:

// web/src/prerender.js
export const todo = [{ id: 1 }, { id: 2 }, { id: 3 }]

The idea here is that you can export an array-of-objects with the same name as the route you’ve requested to be prerendered and Redwood will grab the possible route params from there. You could also use this method to read them off disk from a JSON file or the like.
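For the read-from-disk case, here's a minimal sketch of what that could look like. The file name (`todoIds.json`) and its shape (a plain array of IDs) are hypothetical, not part of the proposal:

```javascript
// Sketch: build route params from a JSON file on disk.
// The file name and its shape ([1, 2, 3]) are hypothetical.
import { readFileSync } from 'fs'

export function paramsFromJson(file) {
  const ids = JSON.parse(readFileSync(file, 'utf8'))
  // Shape them as [{ id: 1 }, { id: 2 }, ...], the form the proposal expects
  return ids.map((id) => ({ id }))
}

// In web/src/prerender.js you might then write:
// export const todo = paramsFromJson('./todoIds.json')
```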

This is cool, but often you’ll want to get a dynamic list of route params from your database at CI build-time. In this case, it would be handy for the code to live on the api side so you have access to whatever data access machinery normally lives there. But we don’t want to force you into complicated data fetching, so we could instead allow you to create a src/build/web/prerender.js file on the api side:

// api/src/build/web/prerender.js
import { todos } from 'src/services/todos'

export const todo = todos

This example is in the context of the example-todo app, and in it we can import the todos function from the todos service (which is exactly why services are written like this!). It will simply return an array of Todo objects, each of which contains an id parameter. Redwood will then orchestrate how that data is consumed by the prerender build process (maybe via a build-time-only GraphQL service, or maybe something else).

In either case, with only a few lines, you can enable prerender on a per-route basis and provide the necessary route params so Redwood can iterate over the possible pages and get them all rendered for you at build time.
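Conceptually, that iteration is just substituting each params object into the route path. A rough sketch of the idea (not actual Redwood internals):

```javascript
// Sketch (not Redwood internals): substitute each params object
// into a route path like '/todo/{id}' to get the concrete paths
// that need to be rendered at build time.
export function pathsForRoute(routePath, paramsList) {
  return paramsList.map((params) =>
    routePath.replace(/\{(\w+)\}/g, (_, name) => String(params[name]))
  )
}

// pathsForRoute('/todo/{id}', [{ id: 1 }, { id: 2 }])
// yields ['/todo/1', '/todo/2']
```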

You might wonder why we don’t simplify this and JUST have the api side file. The answer is two-fold:

  1. If you don’t have an api side at all (which is fine), then you’d have no way to prerender!
  2. If you don’t have an api-side prerender file, we can simplify the build process and skip spinning up any database access there, which makes the build faster and easier. So using the web-side prerender file ends up being an optimization.

Let me know what you think!


This is very exciting Tom!

For the app I’m building the third option, with the api-side prerender.js, is going to be very useful! :rocket:

Sounds really great! I think the third option would be the most useful for my use cases.

Just a quick question, appreciate this might be out of the scope at this time, but does the pre-render proposal allow for any form of ‘Incremental Builds’?

I am sure you guys are aware of the same concept in Next.js.

Obviously it isn’t ideal if you just, for example, make a minor text change and this requires a whole rebuild.


Oooh, I love the way that you’re tying the router and the “prerender.js” file together.


I thought I’d share my developer experience implementing an app that did many page prerenders (30,000+ pages) during build and deploy, and that was, shall we say … sub-optimal.

Hope that my experience can inform some considerations when designing and implementing prerender in RedwoodJS – which will be a great and – I think – oft-used feature.

Page Rendering circa 2017

:steam_locomotive: First, we travel back to Fall 2016, Winter 2017. I jumped aboard the JAMStack train.

The app I built used Middleman – a static-site generator – since I came from the Ruby world, and we had already been using Contentful to store “research data” (company & people profiles, blog posts, “market maps”). So we thought ahead: add in Algolia for search, Auth0 to authenticate (and authorize via roles), and Netlify to build and deploy … and enforce auth.

We were an early user of Netlify’s role-based redirects, where cookies stored the JWT and determined whether a user had access to the market-map area, etc.

Contentful didn’t yet have GraphQL support and its Ruby SDK was still in its early stages, but its Delivery API could access the data I needed to generate pages. Also, Netlify did not yet have build plugins to help with prebuild tasks, or the build plugin cache to help store file data between builds. I also wasn’t going to check 30,000 pages into GitHub to keep in sync. Plus, who would check them in? A local build? Netlify?

First Approach

:crossed_fingers: My first approach was basically (a little simplified):

  • During the Netlify build, use a rake task in the build command to:
  • fetch all the data needed from Contentful
  • store it in YAML files per type (companies, posts, people, etc.) – maybe 3–4k entries
  • fetch article data from a private microservice (25k+)
  • build the app
  • wait for Netlify to push lots and lots of possible changes to the CDN

At some point, build times got to be over 3 hours (sometimes hitting 6 hours if, let’s say, a layout changed, since then all 40k+ pages changed), memory exploded, and pushing to the CDN could fail on timeouts due to the number of files … we got our own build instance and only built in the morning and evening.

Something had to change.

Second Approach, later in 2017

:thinking: Where can I optimize? YAML generation.

  • some optimizations: gzipping and archiving the YAML files to S3
  • only fetching data from Contentful and articles changed since a “last” date
  • optimizing the page rendering for Middleman

I got the builds down to 60-90 mins and no memory explosion.

This involved lots of pre-processing the pages w/ front matter instead of loading the massive datasets.

Again, this was a rake task, but now it could be done w/ Netlify build plugins.

Third Approach, early 2018

:balance_scale: Can we scale to more articles?

We were adding articles at a rate of 1–2k per week, so within a few months 30k would become 60k, then 90k. FYI – there are ~300k now.

  • No longer prerender the article pages
  • Embedded a Vue.js app to fetch from the Article API itself (which validated the user’s JWT and access, etc.)
  • So now fewer pages and API calls

and the build went down to < 30 mins.

Fourth Approach, early 2019

:boom: Scrap it all and make React app.

  • One time full data load
  • Contentful webhooks send changes (CRUD) to a microservice that sends to GetStream collections
  • Microservice sends Articles to GetStream
  • Other data (charts, structured JSON data) stored on S3 and Netlify lambda functions fetch
  • App builds only on feature changes and takes ~2-3mins :tada:

2017 Problems

Here are some of the problems I hit with larger-scale prerendering (again, on not-so-optimized systems, but the concepts hold true for design consideration).

Hope some of this can inform prerendering with RW.

:page_with_curl: Pagination

Whatever fetches the data to be rendered as pages may either have to fetch in batches (100–1000 at a time) or be able to fetch the entire dataset.

I haven’t tried a RW graphql example with pagination to see what that might look like.
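For illustration, draining a paginated API in fixed-size batches might look like the sketch below; `fetchPage(skip, limit)` is a stand-in for whatever API call is actually being made:

```javascript
// Sketch: drain a paginated API in fixed-size batches.
// fetchPage(skip, limit) is a stand-in for the real API call;
// it should resolve to an array of at most `limit` entries.
export async function fetchAll(fetchPage, limit = 1000) {
  const all = []
  for (let skip = 0; ; skip += limit) {
    const batch = await fetchPage(skip, limit)
    all.push(...batch)
    if (batch.length < limit) break // last (possibly partial) page
  }
  return all
}
```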

:brain: Memory

If the entire dataset is returned, is this a memory concern?

When Middleman had to load all the yaml files, it was taking several GB of RAM and as I said, I had to get a dedicated Netlify build box. We were on Enterprise, so everyone there was happy to help. But, no.

Multiple Models in a Page

How would multiple models in one page be handled – assuming the 2nd model is not related to the first?

For example,

// api/src/build/web/prerender.js

import { todos } from 'src/services/todos'

export const todo = todos

I want to show todos, and maybe also show a map of each todo’s lat/lon on the page (guessing here). And let’s say that uses a MapCell that calls a 3rd-party API with the lat/lon to fetch (city, state, zip, country).

Would a MapCell component still render if passed the lat, lon? Even if the Cell makes an API or GraphQL call?

In 2017 this meant I had to have all the data for all models in memory so the page could access whatever it needed.

:alarm_clock: Timeouts / Connection Resets

During pagination of the Contentful API, I relatively frequently hit connection resets or timeouts due to network issues.

You may need to implement an exponential backoff and retry in the pagination calls.
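A minimal sketch of such a retry wrapper (the retry count and delays here are arbitrary choices):

```javascript
// Sketch: retry a flaky async call with exponential backoff.
// `fn` is whatever makes the paginated API call.
export async function withRetry(fn, retries = 5, baseDelayMs = 500) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn()
    } catch (err) {
      if (attempt >= retries) throw err // give up after `retries` retries
      const delay = baseDelayMs * 2 ** attempt // 500, 1000, 2000, ...
      await new Promise((resolve) => setTimeout(resolve, delay))
    }
  }
}
```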

:whale: Prerender data fetch fails = Incomplete Site?

Until I handled timeouts and retries more gracefully, the build/deploy would not necessarily fail … even worse, I’d end up with only a subset of the data, or old data, and the site would reflect that.

:money_with_wings: Number/Cost/Limits of API Calls

If the prerender data isn’t cached (say as a Netlify build plugin or elsewhere) then every build will make api calls. If that is to the Prisma-backed database, maybe not a problem.

But if a third-party/external API is being used, there are rate limits and calls-per-day to consider. This can also have monetary considerations.
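One mitigation is caching the fetched prerender data between builds. A rough sketch, assuming a simple file cache with a time-to-live (the cache location and TTL are arbitrary):

```javascript
// Sketch: cache fetched prerender data on disk so repeat builds
// skip the external API calls entirely while the cache is fresh.
import { readFileSync, writeFileSync, existsSync, statSync } from 'fs'

export async function cachedFetch(cacheFile, fetcher, maxAgeMs = 60 * 60 * 1000) {
  if (existsSync(cacheFile) && Date.now() - statSync(cacheFile).mtimeMs < maxAgeMs) {
    return JSON.parse(readFileSync(cacheFile, 'utf8')) // cache hit: no API call
  }
  const data = await fetcher() // cache miss: hit the API, then persist
  writeFileSync(cacheFile, JSON.stringify(data))
  return data
}
```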

Other Thoughts

  • The term “prerender” has some name recognition from a “prerendering” service for SEO. Will this be confusing?

  • There comes a limit where prerendering isn’t suited and the pages should be dynamic. Maybe it is 100 pages, or 1000, or it depends on how the prerender data is fetched (and from where). The developer needs to be sensible about whether to use it or not. But that’s the nature of things.

  • Would Auth work the same way? Would it be possible to also use Netlify auth/role based redirects to enforce? Not sure one would want to, but just thinking.



So, that’s my experience of “when prerendering goes bad”.

  • Pagination

  • Number/Cost of API calls

  • Build time

  • Memory

  • Connection reset/timeouts

  • Fails, incomplete sites

Confident it won’t go that way in RW.


Read Pre-rendering with react-snap & Redwood and that raised another consideration that I forgot:

  • How to handle routes, pages nested in <Private>

My assumption would be that such auth-backed pages cannot be prerendered and that prerender is not allowed on a Route wrapped in <Private>.


Great proposal! Some thoughts:

  1. Build-time prerendering will use the data for the correct environment I presume? Production would need production data, a staging environment its own staging data, local dev builds local data.
  2. Sometimes you’d like to update the prerendered pages without deploying / building. For example, for a highly dynamic site you might want to update the prerendering every hour. Could there be a remote rake-like task for this purpose? Or does this break the JAMstack concept?

Hi @nickg!

  1. Yes, correct. Whatever DB connection you are using specific to that environment will be the data used. Same for integrations to other services — most likely environment variable controlled (pros/cons).
  2. Absolutely possible. You could set up anything from GH Action to the new

Just a quick question, appreciate this might be out of the scope at this time, but does the pre-render proposal allow for any form of ‘Incremental Builds’?

I think you should be able to build only certain routes with a build parameter (with an array or glob). This way, for example, if you have a CMS (in your Redwood app itself, of course) you could build only the page whose data was recently saved/published, using that as a trigger.

Next.js’s incremental static regeneration is also a good idea, though it expects highly dynamic data. Still, I’d highly welcome it.