How to deploy a Lambda API with path parameters on AWS and locally

I am looking for guidance on how to use Redwood to deploy an API-only application on AWS Lambda, while still supporting local development.

I have a basic example working using the documented example for deploying with serverless.com, but the provided examples only cover simple paths that don’t include path parameters, such as /things/{id}.

API defined in serverless.yml:

events:
  - httpApi:
      authorizer:
        name: myApiKeyAuthorizer
      path: /things
      method: GET
  - httpApi:
      authorizer:
        name: myApiKeyAuthorizer
      path: /things/{id}
      method: GET

AWS documentation/tutorials such as: Tutorial: Build a CRUD API with Lambda and DynamoDB - Amazon API Gateway
recommend using the 2.0 payload/event format, which includes an ‘event.routeKey’ whose values look like ‘GET /things/{id}’.

Based on this, here is my basic function making use of event.routeKey to identify what underlying service function to call.

// Assuming the standard Redwood logger location and a 'things' service:
import { logger } from 'src/lib/logger'
import { thing, things } from 'src/services/things/things'

export const handler = async (event, context) => {
  logger.info('Invoked things function')
  let body
  let statusCode = 200
  const headers = { 'Content-Type': 'application/json' }

  try {
    switch (event.routeKey) {
      case 'GET /things/{id}':
        body = await thing({ id: event.pathParameters.id })
        break
      case 'GET /things':
        body = await things()
        break
      default:
        throw new Error(`Unsupported route: '${event.routeKey}'`)
    }
  } catch (err) {
    statusCode = 400
    body = err.message
  } finally {
    body = JSON.stringify(body)
  }

  return {
    statusCode,
    headers,
    body,
  }
}

The problem is that while this works when the function is deployed on AWS, when invoking the API locally in the dev environment, the event payload does not include the routeKey (or pathParameters).

Event payload on localhost

{
  "event": {
    "httpMethod": "GET",
    "headers": {
      "host": "localhost:8911",
      "user-agent": "curl/7.79.1",
      "accept": "*/*"
    },
    "path": "/things/1",
    "queryStringParameters": {},
    "requestContext": {
      "requestId": "req-1",
      "identity": {
        "sourceIp": "::ffff:127.0.0.1"
      }
    },
    "body": "",
    "isBase64Encoded": false
  }
}

Note that this payload also lacks the ‘pathParameters’ object, so extracting the ‘id’ via ‘event.pathParameters.id’ isn’t possible either.

Is there a place in Redwood to configure these API route paths, such that the API system will map the request to a route key and populate it along with the path parameters in the event payload?


@sdsilas I highly recommend you use the built-in Serverless Framework setup for this. Just remove the config for web deploy to CloudFront:

You didn’t mention bundling your functions after building, which is challenging to roll on your own. Just let rw deploy serverless do it all for you!

Also, here’s the config you were looking for:

Thanks for the response. To clarify, I am using the built-in serverless framework in Redwood to manage the deploy to AWS, and that part is working great! For example, I can make a curl request like this and get the right response:

$ curl -XGET "https://******.execute-api.us-west-2.amazonaws.com/things/12"
{"id":"12", "name":"Thing 12"}

The event payload that comes to the lambda when hosted on AWS looks like this:

{
  "event": {
    "version": "2.0",
    "routeKey": "GET /things/{id}",  <<---- Templatized route key
    "rawPath": "/things/12",
    "rawQueryString": "",
    "headers": {
      "accept": "*/*",
      "content-length": "0",
      "host": "********.execute-api.us-west-2.amazonaws.com"
    },
    "requestContext": {
      ...
    },
    "pathParameters": {
      "id": "12" <<----- Path params extracted based on templatized path
    },
    "isBase64Encoded": false
  }
}

AWS is able to do this b/c I defined the routes in the serverless.yml config before deploying with Redwood, so it becomes part of the AWS gateway config.

The problem is, if I try to make the same request to the same API (same code) running on my localhost:8911 with yarn rw dev, then it fails because the lambda event payload provided by the redwood API framework lacks the routeKey and pathParameters fields.

$ curl "http://localhost:8911/things/12"
Unsupported route: 'undefined'

Localhost lambda event payload:

{
  "event": {
    "httpMethod": "GET", <<---- No route key, just method
    "headers": {
      "host": "localhost:8911",
      "user-agent": "curl/7.79.1",
      "accept": "*/*"
    },
    "path": "/things/12",
    "queryStringParameters": {},
    "requestContext": {
      "requestId": "req-1",
      "identity": {
        "sourceIp": "::ffff:127.0.0.1"
      }
    },
    "body": "",      <<---- No path params
    "isBase64Encoded": false
  }
}

I can’t find where to tell Redwood about my API paths that include template params such as /{id}, so that the Redwood API framework can match a request to a route, extract path params, etc. Instead, it seems like Redwood expects each lambda function to handle only one type of request, with no template path params? Would love to be wrong!

(The redwood.toml docs you sent don’t seem to cover this, unless I’m missing it?)

Thanks!

AH, got it. And definitely the right choice for this!

Here’s my Redwood project that I’m deploying to AWS with Serverless for our deploy-target CI — take a look at this redwood.toml specifically:

This is boilerplate toml for new projects, but perhaps your toml isn’t using the apiUrl = "${API_URL:/api}" syntax?

Look around the rest of the project as well. And feel free to clone and try it locally. Things are working. (Well, except for a CORS error on deployment… using dbAuth and need to iron that out but doing it after launch week.)

tl;dr
If you’ve used rw setup deploy serverless and followed the docs, this should JustWork™

Keep me posted otherwise!

Hmm, I do have the same API_URL syntax:

[web]
  title = "Redwood App"
  port = 8910
  apiUrl = "${API_URL:/api}"       # Set API_URL in production to the Serverless deploy endpoint of your api service, see https://redwoodjs.com/docs/deploy/serverless-deploy
  includeEnvironmentVariables = [] # any ENV vars that should be available to the web side, see https://redwoodjs.com/docs/environment-variables#web
[api]
  port = 8911
[browser]
  open = true

But, doesn’t that just tell the web project where to find the API? In my use case, I am not deploying the web project at all.

I did deploy with rw setup deploy serverless, and I can get the API working locally as long as I don’t try to use template path parameters. (That’s the main issue. I am trying to deploy a traditional crud REST API.)

It feels like I need to tell Redwood about the shape of the API somewhere: what the URL patterns are, which functions get invoked for each template pattern, etc. Basically, the type of configuration you would do to setup an API Gateway from scratch on AWS.

For example, the whole API might be something like:

POST /things
GET /things
GET /things/{id}
PUT /things/{id}
DELETE /things/{id}

But all I’ve told Redwood is to generate a lambda ‘things’ function

yarn rw g function things

So at most I could implement this part of the API, by switching on the HTTP method in the implementation of the lambda.

POST /things
GET /things
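For example, a method-only dispatch sketch (the service functions are stubs standing in for my real services):

```javascript
// Sketch: one 'things' function dispatching on event.httpMethod, which the
// local dev payload does include. The service functions below are stubs.
const things = async () => [{ id: '1', name: 'Thing 1' }]
const createThing = async ({ input }) => ({ id: '2', ...input })

export const handler = async (event) => {
  const headers = { 'Content-Type': 'application/json' }
  try {
    switch (event.httpMethod) {
      case 'GET':
        return { statusCode: 200, headers, body: JSON.stringify(await things()) }
      case 'POST':
        return {
          statusCode: 201,
          headers,
          body: JSON.stringify(await createThing({ input: JSON.parse(event.body) })),
        }
      default:
        return { statusCode: 405, headers, body: '' }
    }
  } catch (err) {
    return { statusCode: 400, headers, body: JSON.stringify(err.message) }
  }
}
```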

I wasn’t picking up that you’re working with a custom function outside the GraphQL API endpoint. Got it. :white_check_mark:

I’ve not done this before and this might be the first I’ve seen someone try to do this on AWS Lambdas via Serverless. But I’m sure it can be done.

Local Dev
Redwood’s local dev server does build, but it’s not a bundled AWS Lambda function; it’s a Fastify server process. So I wonder if you could leverage something here — App Configuration: redwood.toml | RedwoodJS Docs

It’s very close (effectively just an additional watcher) to the production server when you run yarn rw serve api. So it might be beneficial to play around with that command locally to see if you can find a way to get things working, because you can pass options: Command Line Interface | RedwoodJS Docs

Or maybe just get clever with Env Vars you create to manage context.

Production
Just to be doubly clear: this all works correctly when deployed, correct? You just haven’t figured out how to both dev locally and deploy?

Correct.

I’ll dig in under the hood a bit and see if I can’t get something to work.

Thanks for your help and congrats and good luck on the launch! Really liking the framework.


How goes it?

Good! But I ran into some inflexibility on the part of Fastify on one side and AWS on the other (not Redwood).

You can see the core changes to support it here: Support CRUD-style REST requests on Fastify · codersmith/redwood@ebed7fb · GitHub
(Disclaimer, I’m not that comfortable with Typescript, so take this commit as “inspiration”…)

This allows you to define a basic API route spec in server.routes.json, configured via redwood.toml:

# ...
[api]
  port = 8911
  debugPort = 18911
  routes = "./api/server.routes.json"

server.routes.json:

[{
    "method": ["POST", "GET"],
    "url": "/things",
    "handlerFuncFile": "things.js"
  },
  {
    "method": "GET",
    "url": "/things/:id",
    "handlerFuncFile": "things.js"
  }
]

The routes get loaded at startup:

Then you can make API calls such as:

$ curl http://localhost:8911/things
[{"id":"62493fe3d20139a7a4996e7b","name":"Thing 1"},
{"id":"6249401ad20139a7a4996e7c","name":"Thing 2"}]

and the parametric form, specifying the ID param as :id

$ curl http://localhost:8911/things/6249401ad20139a7a4996e7c
{"id":"6249401ad20139a7a4996e7c","name":"Thing 2"}

and create

$ curl http://localhost:8911/things -XPOST --data '{"name": "Thing 3"}'
{"id":"624dc43f840b5567758a3d59","name":"Thing 3"}

Because the route template url (resourceId) and path params are now included in the event object, the corresponding project implementation can check which type of request was made:

// things.js
export const handler = async (event, context) => {
  let body
  let statusCode = 200
  const headers = { 'Content-Type': 'application/json' }

  try {
    switch (event.requestContext.resourceId) {
      case 'GET /things/:id':
        body = await thing({ id: event.pathParameters.id })
        break
      case 'GET /things':
        body = await things()
        break
      case 'POST /things':
        body = await createThing({
          input: { name: JSON.parse(event.body).name },
        })
        break
      default:
        throw new Error(
          `Unsupported route: '${event.requestContext.resourceId}'`
        )
    }
  } catch (err) {
    statusCode = 400
    body = err.message
  } finally {
    body = JSON.stringify(body)
  }

  return {
    statusCode,
    headers,
    body,
  }
}

Works great! :white_check_mark:

But the main problem is that Fastify seems to accept only the /path/:param form of path parameters, while AWS API Gateway uses the /path/{param} form, and neither is configurable to handle the other. :person_facepalming:

So, it seems like some sort of deploy-time processing would need to take place in the project function definition to convert the Fastify-style paths into AWS-style, or perhaps a lookup table that will return the right style of path based on current runtime env.
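A minimal sketch of such a conversion (the function names are mine, not from any library):

```javascript
// Sketch: translate between Fastify ':param' and AWS '{param}' path styles.
export const fastifyToAws = (url) => url.replace(/:([A-Za-z0-9_]+)/g, '{$1}')
export const awsToFastify = (url) => url.replace(/\{([A-Za-z0-9_]+)\}/g, ':$1')

// fastifyToAws('/things/:id')  -> '/things/{id}'
// awsToFastify('/things/{id}') -> '/things/:id'
```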

(Also, the ‘yarn rwfw’ dev integration feels like witchcraft :exploding_head: It just shouldn’t be that easy, and yet it was. Really nice work!)


(Also, the ‘yarn rwfw’ dev integration feels like witchcraft :exploding_head: It just shouldn’t be that easy, and yet it was. Really nice work!)

:heart: I can take about 1% of the credit for this other than helping to drive the vision for making it as easy as possible to contribute. Looks like it’s working!!

[api]
  port = 8911
  debugPort = 18911
  routes = "./api/server.routes.json"

Idea! Because you can use Env Vars in the toml, what if you set the path via something like:

routes = "${SERVER_ROUTES_JSON}"

…which could then be distinct for local dev and deployment. E.g.

// local api/fastify.server.routes.json:
[{
    "method": ["POST", "GET"],
    "url": "/things",
    "handlerFuncFile": "things.js"
  },
  {
    "method": "GET",
    "url": "/things/:id",
    "handlerFuncFile": "things.js"
  }
]

// deploy api/aws.server.routes.json

[{
    "method": ["POST", "GET"],
    "url": "/things",
    "handlerFuncFile": "things.js"
  },
  {
    "method": "GET",
    "url": "/things/{id}",
    "handlerFuncFile": "things.js"
  }
]

It’s a start. If maintenance and keeping things in sync becomes a pain, you could write a build step that takes the Fastify syntax and automatically generates the AWS-style file.
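Such a build step could be a small script; a sketch, assuming the hypothetical file names from this thread:

```javascript
// Hypothetical pre-deploy script: read the Fastify-style route spec and
// write an AWS-style copy with ':id' params rewritten to '{id}'.
import fs from 'fs'

export const convertRoutes = (routes) =>
  routes.map((route) => ({
    ...route,
    url: route.url.replace(/:([A-Za-z0-9_]+)/g, '{$1}'),
  }))

// Call this from e.g. a package.json predeploy script; the default file
// names match the hypothetical ones above.
export const generateAwsRoutes = (
  src = 'api/fastify.server.routes.json',
  dest = 'api/aws.server.routes.json'
) => {
  const routes = JSON.parse(fs.readFileSync(src, 'utf8'))
  fs.writeFileSync(dest, JSON.stringify(convertRoutes(routes), null, 2))
}
```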

Unfortunately, the server.routes.json file is only used to load Fastify routes, b/c the AWS routes need to be defined in serverless.yml.

Routes being registered with Fastify from server.routes.json (full source ebed7fb@GitHub)

// withFunction.js

routes?.forEach((route) => {
  const fullUrl = `${apiRootPath}${route.url.replace(/^\//, '')}`
  const { handler } = require(funcFilesMap[route.handlerFuncFile])
  app.route({
    method: route.method,
    url: fullUrl,
    handler: lambdaRequestHandler,
  })
})

So there is already some route spec duplication to maintain between serverless.yml and server.routes.json.

Where the problem comes in is within the implementation of the handler, assuming one handler switches on multiple routes. Then it’s less of a configuration/environment change; you’re actually having to tweak source code at deploy time.

// things.js handler

export const handler = async (event, context) => {
  logger.info('Invoked things function')
  let body
  let statusCode = 200
  const headers = { 'Content-Type': 'application/json' }

  try {
    switch (event.requestContext.resourceId) {
      case 'GET /things/:id':
        body = await thing({ id: event.pathParameters.id })
        break
      case 'GET /things':
        body = await things()
        break
      case 'POST /things':
        body = await createThing({
          input: { name: JSON.parse(event.body).name },
        })
        break
      default:
        throw new Error(
          `Unsupported route: '${event.requestContext.resourceId}'`
        )
    }
   // ...

But maybe it’s more easily solved by just not trying to switch on the route paths at all, and instead using a separate handler function file for each supported route?

[{
    "method": "POST",
    "url": "/things",
    "handlerFuncFile": "createThing.js"
  },
  {
    "method": "GET",
    "url": "/things",
    "handlerFuncFile": "getThings.js"
  },
  {
    "method": "GET",
    "url": "/things/:id",
    "handlerFuncFile": "getThing.js"
  }
]
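Each handler then becomes trivial; e.g. a getThing.js sketch, assuming event.pathParameters is populated as in my patched dev server (the ‘thing’ service here is a stub standing in for a real lookup):

```javascript
// Sketch of a single-purpose getThing.js handler for the '/things/:id'
// route. The 'thing' service is a stub.
const thing = async ({ id }) => ({ id, name: `Thing ${id}` })

export const handler = async (event) => {
  const headers = { 'Content-Type': 'application/json' }
  try {
    const body = await thing({ id: event.pathParameters.id })
    return { statusCode: 200, headers, body: JSON.stringify(body) }
  } catch (err) {
    return { statusCode: 400, headers, body: JSON.stringify(err.message) }
  }
}
```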