Building a Minimum Viable Stack with RedwoodJS and FaunaDB

Before diving in I should explain the reasoning behind the way I built the project and structured the tutorial. I knew that combining these two technologies for the first time would involve a significant degree of complexity, so I wanted to build the simplest possible application and do everything in a way that would cause the least amount of friction.

There is just a single Home route that renders a single HomePage. That page renders a single cell that fetches data with a single GraphQL query. Finally, a single service queries the backend and returns the array of blog titles rendered on the home page.

For writing to our database with create, update, or delete methods we’ll use the Fauna Shell and create 3 posts that each have a title. Redwood will not perform any write operations on the database. All Redwood does is send a single query to fetch an array of posts.

Redwood Monorepo

  • Create Redwood App
  • Redwood Directory Structure
    • Pages
    • Cells
    • GraphQL Serverless Function
    • Schema Definition Language
    • DB
    • Services

Fauna Database

  • Create FaunaDB account
  • Create new Database
    • Collections
    • Indexes
    • Create
    • Map
    • Get
    • Match

Redwood Monorepo

Create Redwood App

To start we’ll create a new Redwood app from scratch with the Redwood CLI. If you don’t have yarn installed, you can install it with Homebrew:

brew install yarn

Now we’ll use yarn create redwood-app to generate the basic structure of our app.

yarn create redwood-app ./redwood-fauna

I’ve called my project redwood-fauna but feel free to select whatever name you want for your application. We’ll now cd into our new project and use yarn rw dev to start our development server.

cd redwood-fauna
yarn rw dev

Our project’s frontend is running on localhost:8910 and our backend is running on localhost:8911 ready to receive GraphQL queries.

Redwood Directory Structure

One of Redwood’s guiding philosophies is that there is power in standards, so it makes decisions for you about which technologies to use, how to organize your code into files, and how to name things.

It can be a little overwhelming to look at everything that’s already been generated for us. The first thing to pay attention to is that Redwood apps are separated into two directories:

  • api for backend
  • web for frontend
├── api
│   ├── prisma
│   │   ├── schema.prisma
│   │   └── seeds.js
│   └── src
│       ├── functions
│       │   └── graphql.js
│       ├── graphql
│       ├── lib
│       │   └── db.js
│       └── services
└── web
    ├── public
    │   ├── favicon.png
    │   ├── README.md
    │   └── robots.txt
    └── src
        ├── components
        ├── layouts
        ├── pages
        │   ├── FatalErrorPage
        │   │   └── FatalErrorPage.js
        │   └── NotFoundPage
        │       └── NotFoundPage.js
        ├── index.css
        ├── index.html
        ├── index.js
        └── Routes.js

Each side has its own path in the codebase, and the two sides are managed as Yarn workspaces. We’ll be talking to the Fauna client directly, so we can delete the prisma directory along with the files inside it, and we can delete all the code in db.js.
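For reference, Yarn wires the two sides together through a workspaces entry in the root package.json. A rough sketch of that entry (the exact file Redwood generates may differ) looks something like:

```json
{
  "private": true,
  "workspaces": {
    "packages": ["api", "web"]
  }
}
```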

Pages

With our application now set up we can start creating pages. We’ll use the generate page command to create a home page and a folder to hold that page. Instead of generate we can use g to save some typing.

yarn rw g page home /

If we go to our web/src/pages directory we’ll see a HomePage directory containing this HomePage.js file:

// web/src/pages/HomePage/HomePage.js

import { Link } from '@redwoodjs/router'

const HomePage = () => {
  return (
    <>
      <h1>HomePage</h1>
      <p>Find me in "./web/src/pages/HomePage/HomePage.js"</p>
      <p>
        My default route is named "home", link to me with `
        <Link to="home">routes.home()</Link>`
      </p>
    </>
  )
}

export default HomePage

Let’s clean up our component. We’ll only have a single route for now so we can delete the Link import and routes.home(), and we’ll delete everything except a single <h1> tag.

// web/src/pages/HomePage/HomePage.js

const HomePage = () => {
  return (
    <>
      <h1>RedwoodJS+Fauna</h1>
    </>
  )
}

export default HomePage

[Screenshot: the new HomePage]

Cells

Cells provide a simpler and more declarative approach to data fetching. They contain the GraphQL query, loading, empty, error, and success states, each one rendering itself automatically depending on what state your cell is in.

Create a folder in web/src/components called PostsCell and inside that folder create a file called PostsCell.js with the following code:

// web/src/components/PostsCell/PostsCell.js

export const QUERY = gql`
  query POSTS {
    posts {
      data {
        title
      }
    }
  }
`

export const Loading = () => <div>Loading posts...</div>
export const Empty = () => <div>No posts yet!</div>
export const Failure = ({ error }) => <div>Error: {error.message}</div>

export const Success = ({ posts }) => {
  const { data } = posts
  return (
    <ul>
      {data.map((post) => (
        <li key={post.title}>{post.title}</li>
      ))}
    </ul>
  )
}

We’re exporting a GraphQL query that will fetch the posts in the database. We use object destructuring to access the data object and then we map over that response data to display a list of our posts. To render our list of posts we need to import PostsCell in our HomePage.js file and return the component.

// web/src/pages/HomePage/HomePage.js

import PostsCell from 'src/components/PostsCell'

const HomePage = () => {
  return (
    <>
      <h1>RedwoodJS+Fauna</h1>
      <PostsCell />
    </>
  )
}

export default HomePage

[Screenshot: PostsCell rendering with no posts yet]

Schema Definition Language

In our graphql directory we’ll create a file called posts.sdl.js containing our GraphQL schema. In this file we’ll export a schema object containing our GraphQL schema definition language. It defines a Post type with a title field of type String.

Fauna automatically creates a PostPage type for pagination, which has a data field containing an array of every Post. When we create our database we’ll need to import this schema so Fauna knows how to respond to our GraphQL queries.

// api/src/graphql/posts.sdl.js

import gql from 'graphql-tag'

export const schema = gql`
  type Post {
    title: String
  }

  type PostPage {
    data: [Post]
  }

  type Query {
    posts: PostPage
  }
`

DB

When we generated our project, db defaulted to an instance of PrismaClient. Since Prisma does not support Fauna at this time we will be using the graphql-request library to query Fauna’s GraphQL API. First make sure to add the library to your project.

yarn add graphql-request graphql

To access our FaunaDB database through the GraphQL endpoint we’ll need to set a request header containing our database key. We’ll see how to create our database key later in this tutorial.

// api/src/lib/db.js

import { GraphQLClient } from 'graphql-request'

export const request = async (query = {}) => {
  const endpoint = 'https://graphql.fauna.com/graphql'

  const graphQLClient = new GraphQLClient(endpoint, {
    headers: {
      authorization: 'Bearer <FAUNADB_KEY>'
    },
  })
  try {
    return await graphQLClient.request(query)
  } catch (error) {
    console.log(error)
    return error
  }
}
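Hardcoding the key works for a tutorial, but it’s safer to read it from an environment variable. A minimal sketch, assuming a FAUNADB_SECRET variable (the name is my choice, not Redwood’s):

```javascript
// Build the auth header from an environment variable instead of a
// hardcoded key. FAUNADB_SECRET is an assumed variable name.
const faunaHeaders = (secret = process.env.FAUNADB_SECRET) => {
  if (!secret) throw new Error('FAUNADB_SECRET is not set')
  return { authorization: `Bearer ${secret}` }
}
```

You could then pass faunaHeaders() as the headers option when constructing the GraphQLClient.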

Services

In our services directory we’ll create a posts directory with a file called posts.js. Services are where Redwood centralizes all business logic. These can be used by your GraphQL API or any other place in your backend code. The posts function is querying the Fauna GraphQL endpoint and returning our posts data so it can be consumed by our PostsCell.

// api/src/services/posts/posts.js

import { request } from 'src/lib/db'
import { gql } from 'graphql-request'

export const posts = async () => {
  const query = gql`
  {
    posts {
      data {
        title
      }
    }
  }
  `

  const data = await request(query, 'https://graphql.fauna.com/graphql')

  return data['posts']
}

GraphQL Serverless Function

Files in api/src/functions are serverless functions. Most of @redwoodjs/api is for setting up the GraphQL API Redwood Apps come with by default. It happens in essentially four steps:

  1. Everything (i.e. sdl and services) is imported
  2. The services are wrapped into resolvers
  3. The sdl and resolvers are merged/stitched into a schema
  4. The ApolloServer is instantiated with said merged/stitched schema and context

// api/src/functions/graphql.js

import {
  createGraphQLHandler,
  makeMergedSchema,
  makeServices,
} from '@redwoodjs/api'

import schemas from 'src/graphql/**/*.{js,ts}'
import services from 'src/services/**/*.{js,ts}'

import { db } from 'src/lib/db'

export const handler = createGraphQLHandler({
  schema: makeMergedSchema({
    schemas,
    services: makeServices({ services }),
  }),
  db,
})

Let’s take one more look at our entire directory structure before moving on to the Fauna Shell.

├── api
│   └── src
│       ├── functions
│       │   └── graphql.js
│       ├── graphql
│       │   └── posts.sdl.js
│       ├── lib
│       │   └── db.js
│       └── services
│           └── posts
│               └── posts.js
└── web
    ├── public
    │   ├── favicon.png
    │   ├── README.md
    │   └── robots.txt
    └── src
        ├── components
        │   └── PostsCell
        │       └── PostsCell.js
        ├── layouts
        ├── pages
            ├── FatalErrorPage
            ├── HomePage
            │   └── HomePage.js
            └── NotFoundPage
        ├── index.css
        ├── index.html
        ├── index.js
        └── Routes.js

Fauna Database

Create FaunaDB account

You’ll need a FaunaDB account to follow along, but it’s free for creating simple low-traffic databases. You can use your email to create an account, or you can use your GitHub or Netlify account. FaunaDB Shell does not currently support GitHub or Netlify logins, so using those will add a couple extra steps when we want to authenticate with the fauna-shell.

First we will install the fauna-shell which will let us easily work with our database from the terminal. You can also go to your dashboard and use Fauna’s Web Shell.

npm install -g fauna-shell

Now we’ll login to our Fauna account so we can access a database with the shell.

fauna cloud-login

You’ll be asked to verify your email and password. If you signed up for FaunaDB using your GitHub or Netlify credentials, follow these steps, then skip the Create New Database section and continue this tutorial at the beginning of the Collections section.

Create New Database

To create your database enter the fauna create-database command and give your database a name.

fauna create-database my_db

To start the fauna shell with our new database we’ll enter the fauna shell command followed by the name of the database.

fauna shell my_db

Import Schema

Save the following code into a file called sdl.gql and import it to your database:

type Post {
  title: String
}
type Query {
  posts: [Post]
}

Collections

To test out our database we’ll create a collection with the name Post. A database’s schema is defined by its collections, which are similar to tables in other databases. After entering the command, the fauna shell will respond with the newly created collection.

CreateCollection({ name: "Post" })
{
  ref: Collection("Post"),
  ts: 1597718505570000,
  history_days: 30,
  name: 'Post'
}

Create

The Create function adds a new document to a collection. Let’s create our first blog post:

Create(
  Collection("Post"),
  {
    data: {
      title: "Deno is a secure runtime for JavaScript and TypeScript"
    }
  }
)
{
  ref: Ref(Collection("Post"), "274160525025214989"),
  ts: 1597718701303000,
  data: {
    title: "Deno is a secure runtime for JavaScript and TypeScript"
  }
}

Map

We can create multiple blog posts with the Map function. We call Map with an array of post titles and a Lambda that takes post_title as its only parameter. post_title is then used inside the Lambda to provide the title field for each new post.

Map(
  [
    "Vue.js is an open-source model–view–viewmodel JavaScript framework for building user interfaces and single-page applications",
    "NextJS is a React framework for building production grade applications that scale"
  ],
  Lambda("post_title",
    Create(
      Collection("Post"),
      {
        data: {
          title: Var("post_title")
        }
      }
    )
  )
)
[
  {
    ref: Ref(Collection("Post"), "274160642247624200"),
    ts: 1597718813080000,
    data: {
      title:
        "Vue.js is an open-source model–view–viewmodel JavaScript framework for building user interfaces and single-page applications"
    }
  },
  {
    ref: Ref(Collection("Post"), "274160642247623176"),
    ts: 1597718813080000,
    data: {
      title:
        "NextJS is a React framework for building production grade applications that scale"
    }
  }
]

Get

The Get function retrieves a single document identified by ref. We can query for a specific post by using its ID.

Get(
  Ref(
    Collection("Post"), "274160642247623176"
  )
)
{
  ref: Ref(Collection("Post"), "274160642247623176"),
  ts: 1597718813080000,
  data: {
    title:
      "NextJS is a React framework for building production grade applications that scale"
  }
}

Indexes

Now we’ll create an index for retrieving all the posts in our collection.

CreateIndex({
  name: "posts",
  source: Collection("Post")
})
{
  ref: Index("posts"),
  ts: 1597719006320000,
  active: true,
  serialized: true,
  name: "posts",
  source: Collection("Post"),
  partitions: 8
}

Match

Index returns a reference to an index which Match accepts and uses to construct a set. Paginate takes the output from Match and returns a Page of results fetched from Fauna. Here we are returning an array of references.

Paginate(
  Match(
    Index("posts")
  )
)
{
  data: [
    Ref(Collection("Post"), "274160525025214989"),
    Ref(Collection("Post"), "274160642247623176"),
    Ref(Collection("Post"), "274160642247624200")
  ]
}

Lambda

We can get an array of references to our posts, but what if we want the actual documents those references point to? We can Map over the array just like we would in any other programming language.

Map(
  Paginate(
    Match(
      Index("posts")
    )
  ),
  Lambda(
    'postRef', Get(Var('postRef'))
  )
)
{
  data: [
    {
      ref: Ref(Collection("Post"), "274160525025214989"),
      ts: 1597718701303000,
      data: {
        title: "Deno is a secure runtime for JavaScript and TypeScript"
      }
    },
    {
      ref: Ref(Collection("Post"), "274160642247623176"),
      ts: 1597718813080000,
      data: {
        title:
          "NextJS is a React framework for building production grade applications that scale"
      }
    },
    {
      ref: Ref(Collection("Post"), "274160642247624200"),
      ts: 1597718813080000,
      data: {
        title:
          "Vue.js is an open-source model–view–viewmodel JavaScript framework for building user interfaces and single-page applications"
      }
    }
  ]
}

So at this point we have our Redwood app set up with just a single:

  • Page - HomePage.js
  • Cell - PostsCell.js
  • Function - graphql.js
  • SDL - posts.sdl.js
  • Lib - db.js
  • Service - posts.js

We used FQL functions in the Fauna Shell to create a database and seed it with data. FQL functions included:

  • CreateCollection - Create a collection
  • Create - Create a document in a collection
  • Map - Applies a function to all array items
  • Lambda - Executes an anonymous function
  • Get - Retrieves the document for the specified reference
  • CreateIndex - Create an index
  • Match - Returns the set of items that match search terms
  • Paginate - Takes a Set or Ref, and returns a page of results

If we return to the home page we’ll see our PostsCell is fetching the list of posts from our database.

And we can also go to our GraphiQL playground on localhost:8911/graphql.

RedwoodJS is querying the FaunaDB GraphQL API with our posts service on the backend and fetching that data with our PostsCell on the frontend. If we wanted to extend this further we could add mutations to our schema definition language and implement full CRUD capabilities through our GraphQL client.
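As a starting point, Fauna’s GraphQL API auto-generates mutations such as createPost for each imported type, so the Redwood side could expose a matching mutation in posts.sdl.js. This is only a sketch; the resolver wiring is left out:

```graphql
type Mutation {
  createPost(title: String!): Post
}
```

A createPost service would then forward the mutation to Fauna’s endpoint the same way the posts service forwards the query.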


Hi.

Based on the stacktrace it looks like we did not properly initialize our db variable with faunadb.Client.

I don’t think your error is that db is undefined, it is that db.posts is – so cannot call findMany on db.posts.

But, fauna queries are very different from Prisma ones.

The FQL would be something like (if you created an all_posts Index as well):

// api/src/services/posts/posts.js

import { query as q } from "faunadb"
import { db } from 'src/lib/db'

export const posts = () => {
  return db.query(
    q.Map(
      q.Paginate(q.Match(q.Index("all_posts"))),
      q.Lambda("X", q.Get(q.Var("X")))
    )
  )
}

Note: not tested.

You’ll get back JSON like:

{ 
  after: "the ref of the item the data comes after",
  data: [// array of your posts data]
}

So that will probably have to be mapped to your Post SDL.

I only spent a little time with Fauna a few months ago and never became that comfortable with FQL – especially when I tried relations/associations. I don’t come from the Mongo, Hadoop, or other map/reduce database worlds, so the map and lambda syntax hasn’t quite sunk in yet.

I imagine some helpers to map the FQL response to SDL would be really helpful.

Good luck. Excited to see where this goes.


Thanks David, this is definitely tricky cause there’s a lot of room for error in just the set up process of getting the database going, making sure your key is in the right place and all that.

If anyone’s got the time to run through the tutorial as it is right now and let me know if you’re running into these errors or different errors I would be extremely grateful! I’ll be posting this later today on the Fauna community forum as well to get some more eyes on it.

Saw that your original message had

{
  "errors": [
    {
      "message": "Cannot return null for non-nullable field Query.posts.",
      "locations": [
        {
          "line": 2,
          "column": 3
        }
      ],
      "path": [
        "posts"
      ],
      "extensions": {
        "code": "INTERNAL_SERVER_ERROR",
        "exception": {
          "stacktrace": [
            "Error: Cannot return null for non-nullable field Query.posts.",

This is because you’ll have to map the response json from the FQL query to the Post structure expected by its SDL.

Something like (not exact since this data comes from the Fauna shell):

  after: [Ref(Collection("posts"), "999969157773230611")],
  data: [
    {
      ref: Ref(Collection("posts"), "999969156788617747"),
      ts: 1582086674420000,
      data: {        
        title:
          "Augmented Reality and Virtual Reality in Sports and Entertainment Market Growth Prospects, Key Vendors, Future Scenario - openPR",

to


{
  posts: [
    { title: "Building a Minimum Viable Stack with RedwoodJS and FaunaDB" }
  ]
}

So - map the data as a post and extract data.title to build the info needed to match the Post SDL.
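A minimal sketch of that mapping (the toPosts name and the input shape, which mirrors the shell output above, are illustrative):

```javascript
// Flatten a Fauna page ({ data: [{ ref, ts, data: { title } }] })
// into the plain array-of-posts shape the Post SDL expects.
// The ref and ts fields are dropped since the SDL doesn't use them.
const toPosts = (faunaPage) =>
  faunaPage.data.map((doc) => ({ title: doc.data.title }))
```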

Perfect, thank you for that example I get exactly what you’re saying now. These abstractions are so fluid sometimes it really breaks my brain trying to keep the query language and schema language separate in my head.


I hear ya. All the little parts have to fit perfectly together.

Once they do, then it’s pretty cool.

I have not looked into using GraphQL with Fauna, but maybe then at least there isn’t that much shift from gql to fql.

You could then use something like

and connect to your Fauna database that way and query posts in a more “familiar way”.

I wonder if these are the kinds of problems they’re looking to solve with their new library of React hooks, useFauna. I find this to be a really interesting direction to go, and makes me think of Swyx’s recent Growing a Meta-Language talk.

  • useFaunaClient: Instantiate a Fauna client by passing this hook an admin key
  • useGetAll: Get all the Collections, Databases, Documents, Functions or Indexes
  • useGet: Get an individual Collection, Database, Document, Function or Index
  • useCreate: Create a Collection, Database, Document or Index
  • useDelete: Delete a Collection, Database, Document or Index
  • useUpdate: Update a Collection, Database, Document, Function, Index or Role

Using graphql-request worked really well. Just needed to add an authorization header for the token.

import { GraphQLClient, gql } from 'graphql-request'

async function main() {
  const endpoint = 'https://graphql.fauna.com/graphql'

  const graphQLClient = new GraphQLClient(endpoint, {
    headers: {
      authorization: 'Bearer <secret>'
    },
  })

  const query = gql`
    {
      posts {
        data {
          title
        }
      }
    }
  `

  const data = await graphQLClient.request(query)
  console.log(JSON.stringify(data, undefined, 2))
}

main().catch((error) => console.error(error))

And we get this in the console:

{
  "posts": {
    "data": [
      {
        "title": "Deno is a secure runtime for JavaScript and TypeScript"
      },
      {
        "title": "NextJS is a React framework for building production grade applications that scale"
      },
      {
        "title": "YugabyteDB is an open source, high-performance, distributed SQL database for global, internet-scale apps"
      }
    ]
  }
}

Sweet.

Is there any particular reason we needed the graphql-request library to do this even though Redwood has Apollo Client built in? I’ve never used Apollo by itself, but isn’t this the exact kind of stuff it’s made for?


Are you doing this in a service or a cell or … something else (I see main)?

I typically do:

  1. new GraphQLClient with the needed auth
  2. use that client in a service, here to get all posts
  3. “as long as” the data returned out of that service (posts) matches the expected Post SDL all good
  4. Otherwise have to map the response to the SDL. Not sure the best way to reformat, but I just manually map as needed…

I have wondered about this, too, aka “remote schemas” – but I don’t know much about them. The key issues that require a new endpoint are:

  • a different endpoint url
  • different auth

Apollo Client

That is available on just web side I believe and probably want to do this on api side (and shield your Fauna token).

That is available on just web side I believe and probably want to do this on api side (and shield your Fauna token).

Yeah that makes sense.

Right now it’s just a function sitting on my HomePage.js while I made sure it console.logged the data, was hoping you would have some tips about what to actually do with the data now that I’ve got it :sweat_smile:

I’ll try returning it out of the posts service and we’ll see how that goes.

So, here’s what I do:

Client

In your case this would be your FaunaDB gql client; my version talks to Hasura, but the shape is the same.

// /api/src/lib/hasuraClient.js

import { GraphQLClient } from 'graphql-request'

export const request = async (
  query = {},
  domain = process.env.HASURA_DOMAIN
) => {
  const endpoint = `https://${domain}/v1/graphql`

  const graphQLClient = new GraphQLClient(endpoint, {
    headers: {
      'x-hasura-admin-secret': process.env.HASURA_KEY,
    },
  })

  try {
    return await graphQLClient.request(query)
  } catch (error) {
    console.log(error)
    return error
  }
}

Service

// src/services/stories/stories.js
import { requireAuth } from 'src/lib/auth.js'
import { request } from 'src/lib/hasuraClient'

export const stories = async () => {
  requireAuth()

  const query = `
  {
    stories {
      author
      createdAt
      imageUrl
      site
      summary
      title
      url
    }
  }
 `
  const data = await request(query, process.env.HASURA_DOMAIN)

  return data['stories']
}

SDL

import gql from 'graphql-tag'

export const schema = gql`
  type Story {
    author: String
    createdAt: String
    imageUrl: String
    site: String
    summary: String
    title: String
    url: String
  }

  type Query {
    stories: [Story!]
  }
`

And then just query stories as you would normally from any Cell on web using gql calling your RW api.


Article is live on the Fauna site:

You can also see it on my dev.to account:

The Fauna article has a couple typos in the code (I let them know, and they should be fixed in the near future), so if you’re gonna actually build out the project this is probably still the best resource, or you can just clone the project from my GitHub.

3 Likes