Hi all,
Here’s a clunky rundown of a similarly clunky setup I’ve kinda got working to host images/files with S3 using a POST policy, which allows you to do some simple backend validation using only S3 - no lambdas, no image processing in your API.
This post is 1 part showcase of S3 file storage with a presigned POST policy and 1 part cry for help.
I’m going to assume you have an AWS account already, but otherwise I hope to make this exhaustive.
Upload image full-stack flow description:
- User submits the form with a file field. In my case, I set it to auto-submit onChange.
- The client sends an uploadImage GQL request to the API, which procures/builds and returns a set of signed POST request fields. These include a policy which can limit file types, file sizes, object name/path (aka key), an expiry, and a few other parameters.
- The client takes the returned fields, including the signature, and submits them along with the file to your S3 bucket.
- AWS S3 checks that the provided policy fields match the signature and, if so, checks that the file adheres to the policy and, if so, stores it.
- Additionally, you’d probably want the client to tell your backend whether the upload to AWS was successful so you can conditionally handle your Image entity in the db.
1. Architecture/Set Up
IAM User
Set up a dedicated IAM user. I used AWS’ predefined AmazonS3FullAccess
permission policy. For the purposes of this run-through, let’s call it myIamUserName.
Save its credentials in your environment variables.
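For reference, these are the variable names the lib code further down expects (put them in your .env; values shown are placeholders):
AWS_S3_ACCESS_KEY_ID=<your IAM user's access key id>
AWS_S3_SECRET_ACCESS_KEY=<your IAM user's secret access key>
AWS_S3_REGION=<your bucket's region, e.g. us-east-1>
AWS_S3_BUCKET_NAME=myBucketName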
S3 Bucket
Set up an S3 bucket for your images/files. For the purposes of this run-through, let’s call it myBucketName.
Unblock public access as below (no boxes checked).
Set your CORS policy to allow POST traffic from either a * wildcard for all origins (good for dev) or your production domain/s.
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["POST", "PUT"],
    "AllowedOrigins": ["*", "www.mydomain.com"],
    "ExposeHeaders": [],
    "MaxAgeSeconds": 3000
  }
]
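If you’d rather use the CLI than click around the console, I believe you can apply this by saving the JSON above as cors.json and running:
aws s3api put-bucket-cors --bucket myBucketName --cors-configuration file://cors.json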
Also, you need to set up your bucket policy - I’m using wildcards for locations, but you could get restrictive if you were so inclined:
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowAllGet",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::myBucketName/*"
},
{
"Sid": "AllowPostIfAuth",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::123456789012:user/myIamUserName"
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::myBucketName/*"
}
]
}
Note: I’m using 123456789012 as a stand-in for my account id.
Full disclosure: I’m not too sure if the AllowPostIfAuth statement is required, or if the Sid names I’ve chosen are OK. My point is that this part is probably not perfect.
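Likewise, you should be able to apply the bucket policy from the CLI (assuming you’ve saved the JSON above as policy.json):
aws s3api put-bucket-policy --bucket myBucketName --policy file://policy.json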
Finally, you need to make sure ACLs are enabled on the bucket (under the Object Ownership settings) - I got stuck with this off for a while.
2. Redwood Implementation
Dependencies/Installations
For the AWS SDK APIs that I used, run these commands:
yarn workspace api add @aws-sdk/s3-presigned-post
and
yarn workspace api add @aws-sdk/client-s3
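The aws.ts lib further down also imports uuid for default object keys, so add that too if it’s not already on your api side:
yarn workspace api add uuid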
DB/Schema
I added a model for images to my schema.prisma as below:
model User {
  //...
  profilePicture   Image?  @relation(fields: [profilePictureId], references: [id], name: "profilePicture")
  profilePictureId Int?
  Images           Image[] @relation(name: "createdBy")
  //...
}

model Image {
  id           Int      @id @default(autoincrement())
  url          String
  createdAt    DateTime @default(now())
  createdBy    User     @relation(fields: [createdById], references: [id], name: "createdBy")
  createdById  Int
  userProfiles User[]   @relation(name: "profilePicture") // should only ever hold one user at most - this is just for the user's profile picture
}
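Then migrate as usual with Redwood’s Prisma wrapper:
yarn rw prisma migrate dev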
Front End Form
The front-end form requests a signed POST policy and submits it, along with the file, to the S3 bucket.
// Upload image form
import { useRef } from 'react'

import { gql, useMutation } from '@apollo/client'

import { toast } from '@redwoodjs/web/dist/toast'

const GQL_MUTATION = gql`
  mutation UploadImage($input: UploadImageInput!) {
    uploadImage(input: $input) {
      url
      fields
    }
  }
`

const UploadProfileImageForm = ({ runOnSuccess }) => {
  const fileInputRef = useRef(null)

  const [getS3ParamsAndUpload, { loading, error }] = useMutation(GQL_MUTATION, {
    onCompleted: async (data) => {
      const { url, fields } = data.uploadImage
      const file = fileInputRef.current.files[0]
      console.log('Fields', JSON.stringify(fields))

      // build the multipart form from the returned policy fields, then append the file last
      const formData = new FormData()
      Object.entries(fields).forEach(([key, value]) => {
        formData.append(key.replace(/_/g, '-'), value as string)
      })
      formData.append('file', file)

      try {
        const response = await fetch(url, {
          method: 'POST',
          body: formData,
        })
        if (response.ok) {
          toast.success('Image uploaded successfully')
          // runOnSuccess()
        } else {
          toast.error('Failed to upload image')
        }
      } catch (error) {
        toast.error('An error occurred during upload')
      }
    },
    onError: (error) => {
      toast.error(error.message)
    },
  })

  const handleFileChange = (e) => {
    // Trigger the form submission when a file is selected
    onSubmit(e)
  }

  const handleDrop = (e) => {
    // Prevent default behavior when a file is dropped
    e.preventDefault()
    // Put the dropped file(s) into the hidden input so onSubmit can read them
    fileInputRef.current.files = e.dataTransfer.files
    // Trigger the form submission when a file is dropped
    onSubmit(e)
  }

  const onSubmit = async (e) => {
    e.preventDefault() // Prevent default form submission behavior
    const file = fileInputRef.current.files[0]
    if (!file) {
      toast.error('Please select a file')
      return
    }
    if (file.size > 600000) {
      toast.error('Image must be less than 600kb')
      return
    }
    const fileTypes = ['image/jpeg', 'image/png', 'image/webp', 'image/bmp']
    if (!fileTypes.includes(file.type)) {
      toast.error('Image must be a jpeg, png, webp, or bmp')
      return
    }

    // Prepare the input for the mutation
    const input = {
      imageCategory: 'PROFILE',
      MIMEType: file.type,
    }

    // Get a presigned policy and use it to upload the file to S3
    getS3ParamsAndUpload({ variables: { input } })
  }

  return (
    <form
      onSubmit={onSubmit}
      className="flex h-full w-full items-center justify-center"
    >
      <input
        type="file"
        name="file"
        className="hidden"
        ref={fileInputRef}
        onChange={handleFileChange}
      />
      <button
        type="button" // stop the click itself from also submitting the form
        className="flex items-center gap-4 text-teal-500
        sm:text-2xl md:text-4xl"
        onClick={() => {
          fileInputRef.current.click()
        }}
        // here we handle a file being dropped into the dropzone
        onDrop={handleDrop}
      >
        <i className="fa fa-upload flex min-h-fit min-w-fit items-center justify-center rounded-full border-4 border-teal-500 p-3 sm:h-16 sm:w-16 md:h-24 md:w-24 lg:h-32 lg:w-32" />
        Upload image
      </button>
      {/* <input type="submit" value="Upload" disabled={loading} /> */}
    </form>
  )
}

export default UploadProfileImageForm
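You’d then drop the form into a page/component, optionally passing something to run after a successful upload (note runOnSuccess is currently commented out in onCompleted above, so wire it up if you want it) - e.g. something like:
<UploadProfileImageForm runOnSuccess={() => console.log('uploaded!')} />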
GQL
I added another mutation to the boilerplate SDL for my images, aptly called uploadImage, as below:
input UploadImageInput {
  imageCategory: ImageCategory # for conditional validation and save handling
  MIMEType: String # for conditional file extension validation
}

type UploadImageResponse {
  url: String
  fields: JSON
}

enum ImageCategory {
  PROFILE
}

type Mutation {
  # ... existing crud from boilerplate
  uploadImage(input: UploadImageInput!): UploadImageResponse! @requireAuth
}
Lib/Service
As an overview, this service handles:
- Checking that the user is authorised to upload the image,
- Building/requesting the policy fields
- Responding to the client with the policy fields
- Saving the image object, including its shiny new/updated url.
The AWS-specific code is separated out into a lib file.
Apologies in advance for the bird’s nest of code.
// aws.ts lib file
import { S3Client } from '@aws-sdk/client-s3'
import { createPresignedPost } from '@aws-sdk/s3-presigned-post'
import { v4 as uuidv4 } from 'uuid'

const s3Client = new S3Client({
  region: process.env.AWS_S3_REGION,
  credentials: {
    accessKeyId: process.env.AWS_S3_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_S3_SECRET_ACCESS_KEY,
  },
})

export async function generatePresignedPost({
  maxSize = 600000, // default to 600kb
  key = `default/${uuidv4()}`, // default to a uuid in the defaults folder
  Expires = 300, // supposedly time in seconds from now - but something is wrong in my dev env. The actual default is 3600 seconds.
}) {
  // Define the conditions for the presigned POST
  const Conditions = [
    { key: key },
    ['content-length-range', 0, maxSize],
    { acl: 'public-read' },
    // { 'content-type': 'image/webp' }, // haven't got this working even manually??
  ]

  // Create a presigned POST
  return createPresignedPost(s3Client, {
    Bucket: process.env.AWS_S3_BUCKET_NAME,
    Key: key,
    Conditions,
    Fields: { acl: 'public-read' },
    Expires,
  })
  // this returns an object with the following fields:
  // url: string
  // fields: { [key: string]: string } // a collection of form fields as described in the docs
}
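For reference, the fields object I get back looks roughly like this (illustrative values only - the exact keys depend on your SDK version and the Fields/Conditions you pass in):
{
  "bucket": "myBucketName",
  "key": "uploads/pps/p1.webp",
  "acl": "public-read",
  "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
  "X-Amz-Credential": "<access key id>/<date>/<region>/s3/aws4_request",
  "X-Amz-Date": "20240101T000000Z",
  "Policy": "<base64-encoded policy document>",
  "X-Amz-Signature": "<hex signature>"
}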
// images.ts service
import type {
  QueryResolvers,
  MutationResolvers,
  ImageRelationResolvers,
} from 'types/graphql'

import { requireAuth } from 'src/lib/auth'
import { generatePresignedPost } from 'src/lib/aws'
import { db } from 'src/lib/db'
import { logger } from 'src/lib/logger'

export const uploadImage = async ({ input }) => {
  // input is in form: { imageCategory: 'PROFILE', MIMEType: 'image/webp' }
  logger.info(`Upload Photo Input: ${JSON.stringify(input)}`)

  // check user is authenticated
  requireAuth()

  // conditionally validate input and save based on the category provided by the client
  switch (input.imageCategory) {
    case 'PROFILE': {
      // validate the MIME type. File should be one of "image/jpeg", "image/png", "image/webp", "image/gif", "image/bmp"
      let extension = ''
      switch (input.MIMEType) {
        case 'image/jpeg':
          extension = 'jpg'
          break
        case 'image/png':
          extension = 'png'
          break
        case 'image/webp':
          extension = 'webp'
          break
        case 'image/gif':
          extension = 'gif'
          break
        case 'image/bmp':
          extension = 'bmp'
          break
        default:
          throw new Error(`Unsupported image type: ${input.MIMEType}`)
      }

      // generate AWS params. Note pps is short for profile pictures
      const key = `uploads/pps/p${context.currentUser.id}.${extension}` // filename and path
      const presignedPostParams = await generatePresignedPost({
        maxSize: 600000,
        key: key,
        Expires: 1000, // hmmmm.... this is supposed to be seconds
      })
      const uploadURL = `https://${process.env.AWS_S3_BUCKET_NAME}.s3.${process.env.AWS_S3_REGION}.amazonaws.com/`
      const publicFileUrl = `${uploadURL}${key}`

      // update the user's profile picture by checking the user's profile picture and updating it if it exists
      // this isn't really ideal in the event of an s3 upload failure, but it's a start
      // the logic below is as follows: if the user has a profile picture, update it. If not, create one and then update the user to link to it
      // we just let it get handled asynchronously because the client doesn't really have to handle an error here.
      // It's not a big deal if the image doesn't get updated most of the time because it will be the same image path
      // unless the image hosting architecture/naming convention changes.
      await db.user
        .findUnique({
          where: { id: context.currentUser.id },
          include: { profilePicture: true },
        })
        .then(async (user) => {
          logger.info('user:' + JSON.stringify(user))
          if (user.profilePicture) {
            // the user already has a profile picture - only touch it if the url has
            // actually changed (it should be the same path most of the time)
            if (user.profilePicture.url != publicFileUrl) {
              logger.info('updating profile picture')
              await db.image.update({
                data: { url: publicFileUrl },
                where: { id: user.profilePictureId },
              })
            }
          } else {
            await db.image
              .create({
                data: {
                  url: publicFileUrl,
                  createdBy: { connect: { id: context.currentUser.id } },
                },
              })
              .then(async (image) => {
                await db.user.update({
                  data: {
                    profilePicture: { connect: { id: image.id } },
                  },
                  where: { id: context.currentUser.id },
                })
              })
          }
        })

      // logger.info('ppp:', presignedPostParams)
      return presignedPostParams
    }
    default: {
      // raise an error
      throw new Error('Unsupported image category')
    }
  }
}

export const Image: ImageRelationResolvers = {
  createdBy: (_obj, { root }) => {
    return db.image.findUnique({ where: { id: root?.id } }).createdBy()
  },
  userProfiles: (_obj, { root }) => {
    return db.image.findUnique({ where: { id: root?.id } }).userProfiles()
  },
}
Use Image
Use a Redwood cell to retrieve the required image’s url and render it with an <img> tag.
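Something like this minimal cell sketch, assuming the boilerplate image-by-id query from the generated SDL (the file name, alt text, and fallback components are just illustrative):
// ImageCell.tsx - gql is auto-imported by Redwood in cells
export const QUERY = gql`
  query FindImageById($id: Int!) {
    image(id: $id) {
      id
      url
    }
  }
`

export const Loading = () => <div>Loading...</div>

export const Empty = () => <div>No image yet</div>

export const Failure = ({ error }) => <div>Error: {error?.message}</div>

export const Success = ({ image }) => {
  return <img src={image.url} alt="Profile picture" />
}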
3. Quirks, Limitations and stuff that’s not quite right.
I’m not so confident in any part of this stack because it only kinda works some of the time. If anyone sees any glaring fixes, let me know!
Policy Expired Error
For the life of me, I can’t work out how the policy expiration works - using the presigned post function from the AWS SDK in my dev environment, the expiry time isn’t in Zulu time or my local time, causing requests to my S3 bucket to be denied as expired despite an ample expiration window.
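If anyone wants to poke at this with me: the Policy field is just a base64-encoded JSON document, so you can decode it to see the exact expiration S3 will enforce. A quick debugging sketch, assuming the fields object returned from generatePresignedPost:
// decode the post policy to inspect its expiration
const { fields } = await generatePresignedPost({ maxSize: 600000 })
const policy = JSON.parse(Buffer.from(fields.Policy, 'base64').toString('utf-8'))
console.log(policy.expiration) // ISO 8601 UTC timestamp, e.g. "2024-01-01T00:05:00.000Z"
console.log(policy.conditions) // the conditions the form fields are checked against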
MIMEType validation error
I was originally trying to build the policy fields manually (without the @aws-sdk/s3-presigned-post methods) and I could get the MIMEType validation working then, but since switching, I can’t seem to get the MIMEType of uploaded files validated by AWS - it says they’re mismatching even when I manually set the policy to match the console-logged file type value.
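One thing I’d flag for anyone else trying: as far as I can tell from the docs, S3 validates policy conditions against fields actually present in the form, so the client has to append a Content-Type field for a content-type condition to match. A sketch of what I think that looks like, using the starts-with syntax from the AWS docs:
// in generatePresignedPost - allow any image/* content type
const Conditions = [
  { key: key },
  ['content-length-range', 0, maxSize],
  { acl: 'public-read' },
  ['starts-with', '$Content-Type', 'image/'],
]

// and in the client, append a matching Content-Type field before the file
formData.append('Content-Type', file.type)
formData.append('file', file)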
4. Reference/Docs:
Signed Posts and Post Policy:
Browser-Based Uploads Using POST (AWS Signature Version 4) - Amazon Simple Storage Service
SDK API:
AWS SDK for JavaScript v3 (amazon.com)