Upload a file to S3 from a Lambda (Node)
AWS S3 is one of the many services provided by Amazon Web Services (AWS); it allows you to store files, as most of you probably already know. On the other hand, AWS Lambda is one of the most revolutionary services of our day. Although the name may sound intimidating, AWS Lambda is a computing platform that autonomously manages the computing resources required by the developed code and can execute code for any type of application or back-end service. The purpose of this service is to simplify the creation of applications, because it is not necessary to provision or manage servers: AWS Lambda takes care of everything necessary to run and scale your code with high availability. In addition, you pay on demand, that is, only for the processing time involved in executing your code.
The purpose of this post is to explain how to develop a back-end service, without a server (serverless), to upload images (original and thumbnail), using the framework called Serverless, which, by the way, has been adopted by companies such as Coca-Cola for the purpose of creating serverless applications even faster; according to Wikipedia:
Serverless Framework is a free, open-source web framework written with Node.js. Serverless is the first framework developed for building applications on AWS Lambda, a serverless computing platform provided by Amazon as part of Amazon Web Services.
In the next few steps, I'll walk you through building a serverless-based application for image processing and uploading on AWS S3; if you'd rather go straight to the code, here it is.
Note: It is not recommended to use Lambdas for file uploads due to certain limitations of API Gateway and Lambda (for example, API Gateway caps request payloads at 10 MB and synchronous Lambda invocations at 6 MB); if despite this you still want to do it, this blog is for you.
Required Tools
- Node.js 12
- Serverless
- AWS CLI
1. Install AWS CLI (Command Line Interface)
The AWS CLI is a unified tool for managing AWS services; it allows you to control multiple AWS services from the command line. Once downloaded, add a profile with your respective AWS account and credentials.
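For reference, creating a named profile looks like this (the profile name here is just an example):

aws configure --profile serverless-admin
AWS Access Key ID [None]: <your access key>
AWS Secret Access Key [None]: <your secret key>
Default region name [None]: us-east-1
Default output format [None]: json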
2. Install the Serverless framework
Here is a link that explains this process in detail: https://serverless.com/framework/docs/getting-started/.
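In short, the framework is typically installed globally through npm:

npm install -g serverless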
3. Run the following command to generate sample code with Serverless.
First you need to create a folder, for example: serverless-upload-image.
sls create --template hello-world
The above command will create the following files:
- serverless.yml
- handler.js
In the serverless.yml file you will find all the information for the resources required by the developed code, for example the infrastructure provider to be used (such as AWS, Google Cloud or Azure), the database to be used, the functions to be exposed, the events to be listened for, the permissions to access each of the resources, among other things.
The handler.js file contains the generated hello-world code, which is a simple function that returns a JSON document with status 200 and a message. We will rename this file to fileUploaderHome.js.
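The generated handler looks roughly like this (the export name and exact message text vary by template version):

//handler.js (generated by the hello-world template)
module.exports.helloWorld = async event => {
  return {
    statusCode: 200,
    body: JSON.stringify({
      message: 'Go Serverless! Your function executed successfully!',
      input: event
    })
  };
};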
4. Install dependencies
npm init -y
npm install busboy uuid jimp aws-sdk
Since handling files is required, the client will send a POST request with the body encoded in multipart/form-data format; to decode that format we will use the busboy library. In addition, it is necessary to create a thumbnail of the images, so Jimp will be installed, as well as the library called uuid, to generate a unique identifier for the images. Finally, the AWS SDK provides JavaScript objects to manage AWS services, such as Amazon S3, Amazon EC2 and DynamoDB, among others.
5. Create the function to decode the multipart/form-data
//formParser.js
const Busboy = require('busboy');

module.exports.parser = (event, fileSize) =>
  new Promise((resolve, reject) => {
    const busboy = new Busboy({
      headers: {
        'content-type':
          event.headers['content-type'] || event.headers['Content-Type']
      },
      limits: { fileSize }
    });
    const result = { files: [] };

    busboy.on('file', (fieldname, file, filename, encoding, mimetype) => {
      const uploadFile = {};
      file.on('data', data => {
        // concatenate chunks so files larger than a single chunk are kept whole
        uploadFile.content = uploadFile.content
          ? Buffer.concat([uploadFile.content, data])
          : data;
      });
      file.on('end', () => {
        if (uploadFile.content) {
          uploadFile.filename = filename;
          uploadFile.contentType = mimetype;
          uploadFile.encoding = encoding;
          uploadFile.fieldname = fieldname;
          result.files.push(uploadFile);
        }
      });
    });

    busboy.on('field', (fieldname, value) => {
      result[fieldname] = value;
    });

    busboy.on('error', error => reject(error));
    busboy.on('finish', () => resolve(result));

    busboy.write(event.body, event.isBase64Encoded ? 'base64' : 'binary');
    busboy.end();
  });
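For a single uploaded file, the parser resolves to an object shaped roughly like this (values are illustrative):

// {
//   files: [
//     {
//       content: <Buffer ...>,    // raw bytes of the file
//       filename: 'photo.png',
//       contentType: 'image/png',
//       encoding: '7bit',
//       fieldname: 'file'
//     }
//   ]
//   // plus one property per plain (non-file) form field
// }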
6. Function that will process and upload the images to S3
Below is the step-by-step code that processes the original image and the thumbnail and uploads them to S3.
//fileUploaderHome.js
"use strict";
const AWS = require("aws-sdk");
const uuid = require("uuid/v4");
const Jimp = require("jimp");
const s3 = new AWS.S3();
const formParser = require("./formParser");

const bucket = process.env.Bucket;
const MAX_SIZE = 4000000; // 4MB
const PNG_MIME_TYPE = "image/png";
const JPEG_MIME_TYPE = "image/jpeg";
const JPG_MIME_TYPE = "image/jpg";
const MIME_TYPES = [PNG_MIME_TYPE, JPEG_MIME_TYPE, JPG_MIME_TYPE];

module.exports.handler = async event => {
  try {
    const formData = await formParser.parser(event, MAX_SIZE);
    const file = formData.files[0];

    if (!isAllowedFile(file.content.byteLength, file.contentType))
      return getErrorMessage("File size or type not allowed");

    const uid = uuid();
    const originalKey = `${uid}_original_${file.filename}`;
    const thumbnailKey = `${uid}_thumbnail_${file.filename}`;

    const fileResizedBuffer = await resize(file.content, file.contentType, 460);

    const [originalFile, thumbnailFile] = await Promise.all([
      uploadToS3(bucket, originalKey, file.content, file.contentType),
      uploadToS3(bucket, thumbnailKey, fileResizedBuffer, file.contentType)
    ]);

    const signedOriginalUrl = s3.getSignedUrl("getObject", {
      Bucket: originalFile.Bucket,
      Key: originalKey,
      Expires: 60000
    });
    const signedThumbnailUrl = s3.getSignedUrl("getObject", {
      Bucket: thumbnailFile.Bucket,
      Key: thumbnailKey,
      Expires: 60000
    });

    return {
      statusCode: 200,
      body: JSON.stringify({
        id: uid,
        mimeType: file.contentType,
        originalKey: originalFile.Key,
        thumbnailKey: thumbnailFile.Key,
        bucket: originalFile.Bucket,
        fileName: file.filename,
        originalUrl: signedOriginalUrl,
        thumbnailUrl: signedThumbnailUrl,
        originalSize: file.content.byteLength
      })
    };
  } catch (e) {
    return getErrorMessage(e.message);
  }
};
- The call resize(file.content, file.contentType, 460), which will be explained in detail later, generates a thumbnail image from the original image, with a width of 460 px and a height determined automatically. This function receives the binary content of the original file, the type of the file and the size at which the thumbnail image will be generated. The await keyword waits for the image resizing to finish before continuing to the next line.
- The uploadToS3 function receives four parameters: the bucket to which the file will be uploaded, the key of the file, the content in binary form and the file type; it returns a promise. What this function does will be explained in detail later.
- Once we have the original and the thumbnail file, both are uploaded to S3 in parallel with Promise.all(...); when it finishes uploading all the files, it returns an array with the information of each file that has been uploaded. Then the signed URLs (getSignedUrl) are obtained, with a specified expiration time, using the AWS S3 client.
Finally, if everything executes successfully, this function returns a JSON with the information of the processed images.
In the following block, each of the auxiliary functions used in the previous code block is detailed.
const getErrorMessage = message => ({
  statusCode: 500,
  body: JSON.stringify({ message })
});

const isAllowedFile = (size, mimeType) =>
  // validate against the size limit and allowed MIME types defined above
  size <= MAX_SIZE && MIME_TYPES.includes(mimeType);

const uploadToS3 = (bucket, key, buffer, mimeType) =>
  new Promise((resolve, reject) => {
    s3.upload(
      { Bucket: bucket, Key: key, Body: buffer, ContentType: mimeType },
      (err, data) => {
        if (err) reject(err);
        else resolve(data);
      }
    );
  });

const resize = (buffer, mimeType, width) =>
  new Promise((resolve, reject) => {
    Jimp.read(buffer)
      .then(image =>
        image.resize(width, Jimp.AUTO).quality(70).getBufferAsync(mimeType)
      )
      .then(resizedBuffer => resolve(resizedBuffer))
      .catch(error => reject(error));
  });
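As a quick sanity check, the resize helper can be exercised locally like this (the file names are hypothetical):

//resize-test.js — assumes the resize function above is in scope
const fs = require("fs");

resize(fs.readFileSync("./photo.png"), "image/png", 460)
  .then(buffer => fs.writeFileSync("./thumbnail.png", buffer))
  .catch(console.error);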
Well, so far we have reviewed each of the code blocks that handle image processing, validation and uploading to S3. However, the serverless.yml configuration file of the Serverless framework still needs to be covered; it allows us to detail the resources, service definitions, roles, settings, permissions and more for our service.
#serverless.yml
service: file-UploaderService-foqc-home
custom:
  bucket: lambda-test-foqc-file-home
provider:
  name: aws
  runtime: nodejs12.x
  region: us-east-1
  stackName: fileUploaderHome
  apiGateway:
    binaryMediaTypes:
      - '*/*'
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "s3:PutObject"
        - "s3:GetObject"
      Resource:
        - "arn:aws:s3:::${self:custom.bucket}/*"
functions:
  UploadFileHome:
    handler: fileUploaderHome.handler
    events:
      - http:
          path: upload
          method: post
          cors: true
    environment:
      Bucket: ${self:custom.bucket}
resources:
  Resources:
    StorageBucket:
      Type: "AWS::S3::Bucket"
      Properties:
        BucketName: ${self:custom.bucket}
- service: refers to a project; it is the name with which the service will be deployed.
- custom: this section allows defining variables that can be used at various points in the document, centralizing values for development or deployment; therefore we add the bucket variable, with the value lambda-test-foqc-file-home. This value will be used to define the bucket in which the files will be stored.
- provider: in this section the provider, the infrastructure and the respective resource permissions are defined. As mentioned at the beginning of this blog, the provider to be used is Amazon Web Services (aws), the runtime is Node.js 12, the region in which it will be deployed is the eastern United States, and the default name of the CloudFormation stack is fileUploaderHome (although this is not required).
The following line is important for letting our API Gateway support binary files: it is mandatory to declare the apiGateway section, which has as one of its values '*/*', a wildcard defining that any binary format, such as multipart/form-data, will be accepted. (When a binary media type matches, API Gateway delivers the request body base64-encoded and sets event.isBase64Encoded, which is why the parser chooses between 'base64' and 'binary' when writing to busboy.) Then the permissions (iamRoleStatements) are defined, to allow access to the S3 bucket declared in the custom section: ${self:custom.bucket}.
- functions: this section defines each of the function implementations as a service (FaaS). A function is the minimum unit of deployment; a service can be composed of several functions, and each of these should fulfill a single task, although that is just a recommendation. Each function must have a specific configuration, otherwise it will inherit the default one.
The name of our function is UploadFileHome; it is invoked by an HTTP POST event on the upload path, is fired on demand and allows CORS. This event will be handled by our handler function, already implemented in the file fileUploaderHome.js.
- resources: finally, in this section the resources to be used by the functions defined above are declared. The storage bucket (StorageBucket) is defined, which has the type (Type: 'AWS::S3::Bucket') and, in its properties, the name of the bucket (BucketName).
Finally! We have finished building our service, which uploads an image and its thumbnail to S3, so it is time to deploy the service with the following command.
sls deploy --stage=test
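When the deployment finishes, Serverless prints the service information, including the endpoint; it looks roughly like this (the generated ID depends on your account):

endpoints:
  POST - https://xxxxxxxxxx.execute-api.us-east-1.amazonaws.com/test/upload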
At the end of the deployment, the URL of our service will be displayed; test its operation using Postman, as shown in the image.
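If you prefer the command line, an equivalent request can be made with curl, which sends the file as multipart/form-data (the URL and file name are placeholders):

curl -X POST \
  -F "file=@./photo.png" \
  https://xxxxxxxxxx.execute-api.us-east-1.amazonaws.com/test/upload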
If the image upload was successful, the service will return a JSON with the information of the processed image, such as the key, the name, and the URLs of the original file and the thumbnail.
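Based on the handler above, the response body has roughly this shape (all values are illustrative):

{
  "id": "1f2e3d4c-...",
  "mimeType": "image/png",
  "originalKey": "1f2e3d4c-..._original_photo.png",
  "thumbnailKey": "1f2e3d4c-..._thumbnail_photo.png",
  "bucket": "lambda-test-foqc-file-home",
  "fileName": "photo.png",
  "originalUrl": "https://lambda-test-foqc-file-home.s3.amazonaws.com/...",
  "thumbnailUrl": "https://lambda-test-foqc-file-home.s3.amazonaws.com/...",
  "originalSize": 104857
}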
To conclude, in case you need to remove the service, run the following command.
sls remove --stage=test
Conclusions
This service can be used on demand by any external application or service, since it is not coupled to any business logic. In addition, the code can be refactored so that it uploads files in general, not only images; it could also receive, as part of the HTTP POST event, the directory (path) of the bucket where you want to store the file, avoiding a fixed directory. Even so, in its didactic style, it serves as a basis for creating a more robust and configurable service.
It has taken me several days to document and write this post; I am satisfied, and I hope this information has been useful to you.
Thank you!
Source: https://dev.to/foqc/uploading-images-to-aws-s3-with-serverless-1ae0