AWS S3 Post Policy for Validated Image/File hosting

Hi all,
Here’s a clunky rundown of a similarly clunky setup I’ve kinda got working to host images/files on S3 using a POST policy, which lets you do some simple backend validation using only S3 - no lambdas, no image processing in your api.

This post is 1 part showcase of s3 file storage with presigned post-policy and 1 part cry for help.

I’m going to assume you have an AWS account already, but otherwise I hope to make this exhaustive.

Upload Image full stack flow description.

  • The user submits the form with a file field. In my case, I set it to auto-submit onChange.
  • The client sends an uploadImage GQL request to the api, which builds and returns a set of signed POST policy fields. These include a policy that can limit file types, file sizes, the object name/path (aka key), an expiry, and a few other parameters.
  • The client takes the returned fields, including the signature, and submits them along with the file to your S3 bucket.
  • AWS S3 checks that the provided policy matches the signature, then checks that the file adheres to the policy, and if so, stores the file.
  • Additionally, you’d probably want the client to tell your backend whether the upload to AWS succeeded, so you can conditionally handle your images entity in the db.

1. Architecture/Set Up

IAM User

Set up a dedicated IAM user. I used AWS’ predefined AmazonS3FullAccess permission policy. For the purposes of this run-through, let’s call it myIamUserName.
Save its credentials in your environment variables.
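
For reference, the api lib further down reads them under these names (just a sketch - rename to match whatever you actually use):

# .env on the api side - names match what src/lib/aws.ts below expects
AWS_S3_REGION=ap-southeast-2                # example region
AWS_S3_ACCESS_KEY_ID=<key id for myIamUserName>
AWS_S3_SECRET_ACCESS_KEY=<secret for myIamUserName>
AWS_S3_BUCKET_NAME=myBucketName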

S3 Bucket

Set up an S3 bucket for your images/files. For the purposes of this run-through, let’s call it myBucketName.

Unblock public access (no boxes checked in the Block Public Access settings).

Set your CORS policy to allow POST traffic from either a * wildcard for all origins (good for dev) or your production domain(s):

[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "POST",
            "PUT"
        ],
        "AllowedOrigins": [
            "*",
            "www.mydomain.com"
        ],
        "ExposeHeaders": [],
        "MaxAgeSeconds": 3000
    }
]

Also, you need to set up your bucket policy - I’m using wildcards for locations, but you could get restrictive if you were so inclined:

    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAllGet",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::myBucketName/*"
        },
        {
            "Sid": "AllowPostIfAuth",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:user/myIamUserName"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::myBucketName/*"
        }
    ]
}

Note I’m using 123456789012 as a stand-in for my account id.
Full disclosure: I’m not too sure whether the AllowPostIfAuth statement is required, or whether the Sid names I’ve chosen are OK - my point is that this part is probably not perfect.

Finally, you need to make sure ACLs are enabled on the bucket (Object Ownership set to allow ACLs) - I got stuck with this turned off for a while.

2. Redwood Implementation

Dependencies/Installations

For the AWS SDK packages I used, run these commands:
yarn workspace api add @aws-sdk/s3-presigned-post
and
yarn workspace api add @aws-sdk/client-s3
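
The aws.ts lib below also imports uuid for default object keys, so if it isn’t already in your api workspace, add it too:
yarn workspace api add uuid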

DB/Schema

I added a model for images to my schema.prisma as below:

model User {
//...
  profilePicture                  Image?           @relation(fields: [profilePictureId], references: [id], name: "profilePicture")
  profilePictureId                Int?
  Images                          Image[]          @relation(name: "createdBy")
//...
}

model Image {
  id           Int      @id @default(autoincrement())
  url          String
  createdAt    DateTime @default(now())
  createdBy    User     @relation(fields: [createdById], references: [id], name: "createdBy")
  createdById  Int
  userProfiles User[]   @relation(name: "profilePicture") // should only be one long at max. This is only for the user's profile picture
}
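
Then apply it with a migration as usual, e.g. (the migration name here is just an example):
yarn rw prisma migrate dev --name add-images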

Front End Form

The front end form requests a signed post policy, and submits it with the file to the s3 bucket.

//Upload image form 
import { useRef } from 'react'

import { gql, useMutation } from '@apollo/client'

import { toast } from '@redwoodjs/web/dist/toast'

const GQL_MUTATION = gql`
  mutation UploadImage($input: UploadImageInput!) {
    uploadImage(input: $input) {
      url
      fields
    }
  }
`

const UploadProfileImageForm = ({ runOnSuccess }) => {
  const fileInputRef = useRef(null)

  const [getS3ParamsAndUpload, { loading, error }] = useMutation(GQL_MUTATION, {
    onCompleted: async (data) => {
      const { url, fields } = data.uploadImage
      const file = fileInputRef.current.files[0]

      console.log('Fields', JSON.stringify(fields))
      const formData = new FormData()
      Object.entries(fields).forEach(([key, value]) => {
        formData.append(key.replace(/_/g, '-'), value as string)
      })
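      // note: S3 expects the file itself to be the last field in the POST form - any fields after it are ignored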
      formData.append('file', file)
      try {
        const response = await fetch(url, {
          method: 'POST',
          body: formData,
        })

        if (response.ok) {
          toast.success('Image uploaded successfully')
          // runOnSuccess()
        } else {
          toast.error('Failed to upload image')
        }
      } catch (error) {
        toast.error('An error occurred during upload')
      }
    },
    onError: (error) => {
      toast.error(error.message)
    },
  })

  const handleFileChange = (e) => {
    // Trigger the form submission when file is selected
    onSubmit(e)
  }

  const handleDrop = (e) => {
    // Prevent default behavior when a file is dropped
    e.preventDefault()
    // Trigger the form submission when file is dropped
    onSubmit(e)
  }

  const onSubmit = async (e) => {
    e.preventDefault() // Prevent default form submission behavior
    const file = fileInputRef.current.files[0]

    if (!file) {
      toast.error('Please select a file')
      return
    }

    if (file.size > 600000) {
      toast.error('Image must be less than 600kb')
      return
    }

    const fileTypes = ['image/jpeg', 'image/png', 'image/webp', 'image/bmp']
    if (!fileTypes.includes(file.type)) {
      toast.error('Image must be a jpeg, png, webp, or bmp')
      return
    }

    // Prepare the input for the mutation
    const input = {
      imageCategory: 'PROFILE',
      MIMEType: file.type,
    }
    // Get a presigned policy and use it to upload the file to S3
    getS3ParamsAndUpload({ variables: { input } })
  }

  return (
    <form
      onSubmit={onSubmit}
      className="flex h-full w-full items-center justify-center"
    >
      <input
        type="file"
        name="file"
        className="hidden"
        ref={fileInputRef}
        onChange={handleFileChange}
      />
      <button
        type="button"
        className="flex items-center gap-4 text-teal-500
        sm:text-2xl  md:text-4xl"
        onClick={() => {
          fileInputRef.current.click()
        }}
        // here we handle a file being dropped into the dropzone
        onDrop={handleDrop}
      >
        <i className="fa fa-upload  flex min-h-fit min-w-fit items-center justify-center rounded-full border-4 border-teal-500 p-3 sm:h-16 sm:w-16   md:h-24 md:w-24  lg:h-32 lg:w-32" />
        Upload image
      </button>
      {/* <input type="submit" value="Upload" disabled={loading} /> */}
    </form>
  )
}

export default UploadProfileImageForm

GQL

I added another mutation to the boilerplate sdl for my images service, aptly called uploadImage, as below:

  input UploadImageInput {
    imageCategory: ImageCategory #for conditional validation and save handling
    MIMEType: String #for conditional file extension validation
  }

  type UploadImageResponse {
    url: String
    fields: JSON
  }

  enum ImageCategory {
    PROFILE
  }

  type Mutation {
   #... existing crud from boilerplate
    uploadImage(input: UploadImageInput!): UploadImageResponse! @requireAuth
  }

Lib/Service

As an overview, this service handles:

  • Checking that the user is authorised to upload the image,
  • Building/requesting the policy headers
  • Responding to the client with the policy headers
  • Saving the image object, including its shiny new/updated url.

The AWS-specific parts are separated out into a lib file.
Apologies in advance for the bird’s nest of code.

// aws.ts lib file
import { S3Client } from '@aws-sdk/client-s3'
import { createPresignedPost } from '@aws-sdk/s3-presigned-post'
import { v4 as uuidv4 } from 'uuid'

const s3Client = new S3Client({
  region: process.env.AWS_S3_REGION,
  credentials: {
    accessKeyId: process.env.AWS_S3_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_S3_SECRET_ACCESS_KEY,
  },
})

export async function generatePresignedPost({
  maxSize = 600000, // default to 600kb
  key = `default/${uuidv4()}`, // default to a uuid in the defaults folder
  Expires = 300, // supposedly time in seconds from now - but something is wrong in my dev env. The actual default is 3600 seconds.
}) {
  // Define the conditions for the presigned URL
  const Conditions = [
    { key: key },
    ['content-length-range', 0, maxSize],
    { acl: 'public-read' },
    // { 'content-type': 'image/webp' }, // haven't got this working even manually??
  ]

  // Create a presigned POST
  return createPresignedPost(s3Client, {
    Bucket: process.env.AWS_S3_BUCKET_NAME,
    Key: key,
    Conditions,
    Fields: { acl: 'public-read' },
    Expires,
  })
  // this returns an object with the following fields:
  // url: string
  // fields: { [key: string]: string } // a collection of headers as described in the docs
}
//images.ts service
import type {
  QueryResolvers,
  MutationResolvers,
  ImageRelationResolvers,
} from 'types/graphql'

import { requireAuth } from 'src/lib/auth'
import { generatePresignedPost } from 'src/lib/aws'
import { db } from 'src/lib/db'
import { logger } from 'src/lib/logger'

export const uploadImage = async ({ input }) => {
  // input is in the form: { imageCategory: 'PROFILE', MIMEType: 'image/...' }
  logger.info(`Upload Photo Input: ${JSON.stringify(input)}`)
  //check user is authenticated
  requireAuth()

  // conditionally validate input and save based on type provided by client
  switch (input.imageCategory) {
    case 'PROFILE': {
      // validate the image headers. File should be one of "image/jpeg", "image/png", "image/webp", "image/gif", "image/bmp"
      let extension = ''
      switch (input.MIMEType) {
        case 'image/jpeg':
          extension = 'jpg'
          break
        case 'image/png':
          extension = 'png'
          break
        case 'image/webp':
          extension = 'webp'
          break
        case 'image/gif':
          extension = 'gif'
          break
        case 'image/bmp':
          extension = 'bmp'
          break
        default:
          throw new Error(`Unsupported image type: ${input.MIMEType}`)
      }

      // generate AWS params. Note pps is short for profile pictures
      const key = `uploads/pps/p${context.currentUser.id}.${extension}` // filename and path

      const presignedPostParams = await generatePresignedPost({
        maxSize: 600000,
        key: key,
        Expires: 1000, // hmmmm.... this is supposed to be seconds
      })

      const uploadURL = `https://${process.env.AWS_S3_BUCKET_NAME}.s3.${process.env.AWS_S3_REGION}.amazonaws.com/`
      const publicFileUrl = `${uploadURL}${key}`

      // update the user's profile picture by checking the user's profile picture and updating it if it exists
      // this isn't really ideal in the event of a s3 upload failure, but it's a start
      // the logic below is as follows: if the user has a profile picture, update it. If not, create one and then update the user to link to it
      // we just let it get handled asynchronously because the client doesn't really have to handle an error here.
      // It's not a big deal if the image doesn't get updated most of the time because it will be the same image path
      // unless the image hosting architecture/naming convention changes.
      await db.user
        .findUnique({
          where: { id: context.currentUser.id },
          include: { profilePicture: true },
        })
        .then(async (user) => {
          logger.info('user:' + JSON.stringify(user))
          if (user.profilePicture) {
            // if the user has a profile picture already
            if (user.profilePicture.url != publicFileUrl) {
              // only update if the url has actually changed - most of the time it won't, since the key is deterministic per user
              logger.info('updating profile picture')

              await db.image.update({
                data: { url: publicFileUrl },
                where: { id: user.profilePictureId },
              })
            }
          } else {
            await db.image
              .create({
                data: {
                  url: publicFileUrl,
                  createdBy: { connect: { id: context.currentUser.id } },
                },
              })
              .then(async (image) => {
                await db.user.update({
                  data: {
                    profilePicture: { connect: { id: image.id } },
                  },
                  where: { id: context.currentUser.id },
                })
              })
          }
        })
      // logger.info('ppp:', presignedPostParams)
      return presignedPostParams
    }
    default: {
      // raise an error
      throw new Error('Unsupported image category')
    }
  }
}

export const Image: ImageRelationResolvers = {
  createdBy: (_obj, { root }) => {
    return db.image.findUnique({ where: { id: root?.id } }).createdBy()
  },
  userProfiles: (_obj, { root }) => {
    return db.image.findUnique({ where: { id: root?.id } }).userProfiles()
  },
}


Use Image

Use a Redwood cell to retrieve the required image's url and render it with an <img> tag.
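
A minimal sketch of what that cell could look like (assuming the image query from the generated sdl - adjust names to your app):

// ProfileImageCell.tsx - sketch only
export const QUERY = gql`
  query FindProfileImage($id: Int!) {
    image(id: $id) {
      id
      url
    }
  }
`

export const Loading = () => <div>Loading...</div>

export const Empty = () => <div>No image yet</div>

export const Failure = ({ error }) => <div>Error: {error?.message}</div>

export const Success = ({ image }) => <img src={image.url} alt="Profile" />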

3. Quirks, Limitations and stuff that’s not quite right.

I’m not so confident in any part of this stack because it only kinda works some of the time. If anyone sees any glaring fixes, let me know.

Policy Expired Error

For the life of me, I can’t work out how the policy expiration works - using the presigned post function from the aws sdk in my dev environment, the expiry time isn’t in Zulu time or my local time, which causes requests to my s3 bucket to be denied as expired despite an ample expiration window.

MIMEType validation error.

I was originally trying to build the headers manually (without the @aws-sdk/s3-presigned-post methods) and could get the MIMEType validation working, but since then I can’t seem to get the MIMEType of the uploaded files validated by AWS - it says they’re mismatching even when I manually set the policy to match the console-logged file type header value.

4. Reference/Docs:

Signed Posts and Post Policy:
Browser-Based Uploads Using POST (AWS Signature Version 4) - Amazon Simple Storage Service
SDK API:
AWS SDK for JavaScript v3 (amazon.com)


Hi there,

File uploads to S3 are something I’ve solved for pretty extensively in redwood, so hopefully I can lend some help here.

A few questions off the bat -

  1. Is there a reason/constraint why you’re uploading the image directly from the browser? You submit the form directly to S3 but then you make an API call in order to update your user anyway, which limits the benefits you’d see from browser based uploads since you need to hit your api to validate afterwards.
  2. Are you in a serverless or serverful environment? My approach at a high level has been: browser receives file → encode the file to a data url using FileReader::readAsDataURL() → pass the data url to the api → convert the data url to a buffer → send the buffer to s3 with PutObjectCommand → generate and return a presigned url from s3.
    But in my case I’ve always used serverful hosting, so there may be some serverless constraint to my approach that I’m not considering
  3. Regarding your mimeType issue - I was just reading the docs for the createPresignedPost conditions and was wondering if you had tried the starts-with condition for image types documented there - ["starts-with", "$Content-Type", "image/"] (see the sketch after this list). If that still doesn’t work, it could be related to the redwood version you’re running, as lower versions had some issues with the fastify server not interpreting mimetypes and formData requests properly.
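
For reference, the condition would slot into your generatePresignedPost something like this (a sketch against your Conditions array above):

  const Conditions = [
    { key: key },
    ['content-length-range', 0, maxSize],
    { acl: 'public-read' },
    // accept any image/* content type
    ['starts-with', '$Content-Type', 'image/'],
  ]

and the browser side then needs to send a matching field before the file:

  formData.append('Content-Type', file.type)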

A simplified example of my approach:

Front End

const UPLOAD = gql`
#The Upload type here is a scalar we add to the graphql schema
  mutation upload($file: Upload!, $name: String!) { 
    upload(input: { file: $file, name: $name }) {
      fileId
      signedUrl
    }
  }
`;

...

const [upload] = useMutation(UPLOAD, {
    refetchQueries: [{ query: QUERY }],
    awaitRefetchQueries: true
  });

...
//this example implementation is for react-dropzone's useDropzone hook, but however you gain access to the file in the browser should be similar
const onDropAccepted = async (files) => {
  new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.readAsDataURL(files[0]);
    reader.onload = () =>
      resolve({
        file: reader.result,
        name: files[0].name
      });
    reader.onerror = (error) => reject(error);
  }).then(({ file, name }: any) => {
    upload({
      variables: {
        file: file,
        name: name
      }
    })
  });
}

Back End

upload.sdl.ts

input UploadInput {
    name: String
    file: Upload
  }

type UploadPayload {
    fileId: String!
    signedUrl: String!
  }

type Mutation {
    upload(input: UploadInput!): UploadPayload!
      @requireAuth
  }

scalars.sdl.ts

export const schema = gql`
  scalar Upload
`;

api/src/lib/s3.ts

import { S3Client } from '@aws-sdk/client-s3';

const client = new S3Client({
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_ACCESS_KEY_SECRET
  },
  region: process.env.AWS_REGION
});

export default client;

api/src/services/upload.ts

import { GetObjectCommand, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import { v4 as uuidv4 } from 'uuid';

import s3Client from 'src/lib/s3';

export const upload = async (
  { input }: UploadInput
) => {
  const { name, file } = input;
  const buff = Buffer.from(file.split(',')[1], 'base64');
  const fileId = uuidv4();
  const command = new PutObjectCommand({
    Body: buff,
    Bucket: process.env.AWS_BUCKET_NAME,
    Key: fileId,
    ContentType: 'application/*' //this example isn't enforcing mimetype, but you can pull that data from the file string if necessary with file.split(',')[0] https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/Data_URLs
  });
  await s3Client.send(command);
  const GetCommand = new GetObjectCommand({
    Bucket: process.env.AWS_BUCKET_NAME,
    Key: fileId
  });
  const signedUrl = await getSignedUrl(s3Client as any, GetCommand as any, {
    expiresIn: 86400 //extended expiry time here, set to whatever you application requires
  });

  return {
    fileId,
    signedUrl
  };
};

After that we just save the fileId in the relevant spot in the database, and from that we can regenerate a new signedUrl whenever we need it using a new GetObjectCommand.
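
For example, re-signing a stored fileId later is roughly (a sketch reusing the client and env names above):

// re-sign a stored fileId whenever you need a fresh display url
import { GetObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

import s3Client from 'src/lib/s3';

export const signedUrlForFileId = (fileId: string) =>
  getSignedUrl(
    s3Client as any,
    new GetObjectCommand({
      Bucket: process.env.AWS_BUCKET_NAME,
      Key: fileId
    }) as any,
    { expiresIn: 86400 }
  );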

Note that this approach also requires your api/server.config.js to have a bodyLimit set that can handle the size of the file you’re passing to the redwood api.
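
For example (a sketch - the option is fastify's bodyLimit, which defaults to roughly 1mb; check how your redwood version lays out server.config.js):

// api/server.config.js - relevant part of the config object only
const config = {
  // ...existing options
  bodyLimit: 10 * 1024 * 1024, // ~10mb - size this to the largest upload you expect
}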

Let me know if that all makes sense - I’m happy to help where I can and share more from the approach I’ve taken as well (this may be a little over simplified), but again my experience has been more geared to serverful hosting if you have specific reasons you need to send directly from the browser to s3.

Hi Tyler,
Thanks for the reply. To answer your questions (albeit not in order).
2. Yeah, my apps use serverless architecture, which (at least as I understand it) means I have a relatively short time limit per invocation, but more importantly, I pay for bandwidth (is my mistake thinking the buffer is the whole file?).
3. Yeah, I’ve tried the “starts-with” and key/value syntaxes - strangely, when I was logging the received header, I was getting some non-image value (something something octet, I can’t recall atm).

  1. Essentially, I don’t want to pay for additional bandwidth transferring the file through my api, or cpu validating it (especially if the user bypasses client-side validation and uploads a massive file). I’ve taken this approach because I want to securely outsource the file transfer and validation to S3 and the client. Plus, heck, it must be a tiny bit better for the environment without intermediating the bulk of the data haha.

Ahh, gotcha. The bandwidth would definitely be a constraint, as posting the file to your API would consume at least as much bandwidth as the file size. The buffer is the whole file, yeah - you could play around with different encodings and compression, but it’ll be more or less the same size by the time it gets to the server.

For the content-type header, what you got was likely application/octet-stream, which is what the browser sets when it receives a file it doesn’t know the type of. Maybe in the fetch request in the onCompleted of your resolver you could set the header ahead of time, so the browser knows what to tell AWS about the file. Something like:

const response = await fetch(url, {
  method: 'POST',
  body: formData,
  headers: {
    'Content-Type': 'image/*',
  },
})

That makes it more explicit in your request what you’re sending. You may have better luck with 'Content-Type': 'multipart/form-data' as well. Maybe also confirm in your browser’s network tab that the other fields returned by createPresignedPost are getting mapped over to the fetch request via the formData - if they aren’t coming over automatically, you may need to explicitly add them to the fetch request.
Let me know if that fixes anything!

Ok, for anyone scared off by the policy expiration issue: it’s actually a non-issue with respect to S3.
(It was an environment issue caused by a drift in the local machine’s clock inside WSL, which in turn caused a nonsense token expiry to be requested.)
(Unfortunately, I can’t seem to just edit it out of the original post.)
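(For anyone who hits the same thing: the usual fix seems to be resyncing the clock inside WSL, e.g. sudo hwclock -s, or just restarting WSL.)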

Hey thanks for this nice discussion!

For convenience when reading objects from S3, we could also use a transformer directive on the graphql queries:

type Image  {
 id: String!
 url: String! @storageUrl
}

Then, within the transformer, we can check whether we have a normal url linking to an external resource or a storage path, and get a presigned url for reading it.

const transform: TransformerDirectiveFunc = async ({ context, resolvedValue }) => {
  // if url is something like https:/....image.jpg then we just return it
  if (isUrl(resolvedValue)) { // i.e. isUrl regex: https://gist.github.com/dperini/729294
    return resolvedValue
  }
  // or else we assume url is a s3 path /my/path/to/image.jpg so we can send a presigned get url
  const presignedUrl = storageClient.presignedUrl('GET', resolvedValue, conditions)

  return presignedUrl
}

In this way we can avoid calling the Get request in the service layer.
But for putting, I think I’ve used a similar approach to littletuna4’s, doing the upload via the browser.


Updating the code from my post above.
It now also allows arrays and takes the expiration time into account, so it can be used for caching, too.

I am wondering if this might be useful for the File Scalar of the new Upload functionality, @danny? It would mean not having to call withSignedUrl.
In my case there is the option to store either a url or a storage path, so external urls can be used as well.
I guess the File Scalar needs to have an actual file for graphql though, so maybe this doesn’t make it possible?

import { createTransformerDirective, TransformerDirectiveFunc } from '@redwoodjs/graphql-server'

import { logger } from 'src/lib/logger'
import { storageClient } from 'src/lib/storageClient'
import { isUrl } from 'src/utils/isUrl'
export const schema = gql`
  """
  Use @storageUrl to transform the resolved value to return a modified result.
  """
  directive @storageUrl on FIELD_DEFINITION
`

const transform: TransformerDirectiveFunc = async ({ context, resolvedValue }) => {

  if (isUrl(resolvedValue) && !Array.isArray(resolvedValue)) {
    logger.debug(
      { custom: { resolvedValue } },
      ' resolvedValue in storageUrl directive is already a URL'
    )
    return resolvedValue
  }
  // could also be provided as directiveArg
  const expirationTime = 60 * 60 // 1h

  const presignedUrls = await (Array.isArray(resolvedValue)
    ? Promise.all(
        resolvedValue.map(async (v) => {
          if (isUrl(v)) {
            return v
          }
          const version = await storageClient.getLatestVersionId(v)
          return storageClient.presignedUrl('GET', v, expirationTime, {
            versionId: version ? version : '',
          })
        })
      )
    : storageClient.getLatestVersionId(resolvedValue).then((version) => {
        return storageClient.presignedUrl('GET', resolvedValue, expirationTime, {
          versionId: version ? version : '',
        })
      })
  ).catch((e) => {
    logger.error(resolvedValue, resolvedValue.toString() + ' Error in storageUrl directive: ' + e)
  })

  logger.debug({ custom: { resolvedValue } }, ' resolvedValue in storageUrl directive')
  logger.debug({ custom: { presignedUrls } }, ' transformedValue in storageUrl directive')

  return presignedUrls
}

const storageUrl = createTransformerDirective(schema, transform)

export default storageUrl