Configure AWS Lambda to Automatically Set Cache Control Headers on S3 Objects
Getting Started
I am assuming you already have an AWS account set up. If not, head over to Amazon AWS and create one. Once your account is created and verified, go ahead and log in.
Navigate to Lambda from the Services menu (it appears under the Compute section).
If you do not have any existing Lambda functions, click the blue Get Started button, or Create a Lambda Function if you already have other functions configured.
Next, you will need to pick a blueprint to use for your function. Search for s3-get-object, and select it.
The next page, Configure Event Source, is where we configure what kind of event will trigger our Lambda function to run. I went with the following (an equivalent scripted setup is sketched after the list):
- Bucket: YOUR-BUCKET
- Event type: Object Created (All)
- Prefix: Leave blank for entire bucket, or add a folder within your bucket to target (more folders can be added later).
- Suffix: Leave blank for all file types, or add a specific file type to target (more types can be added later).
- Check Enable trigger (this makes the Lambda function run whenever an asset is uploaded to your bucket).
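If you prefer to script the trigger rather than click through the console, the same event source can be expressed with the AWS SDK for Node.js. This is a minimal sketch rather than part of the original walkthrough: the bucket name, function ARN, prefix, and suffix are placeholders, and when configuring the notification this way you also have to grant S3 permission to invoke the function yourself (lambda.addPermission), something the console normally does for you.

```javascript
// Sketch: configure the S3 -> Lambda trigger with the AWS SDK (aws-sdk v2).
// YOUR-BUCKET, the function ARN, and the prefix/suffix filters are placeholders.
let aws = require('aws-sdk');
let s3 = new aws.S3({ apiVersion: '2006-03-01' });

var params = {
  Bucket: 'YOUR-BUCKET',
  NotificationConfiguration: {
    LambdaFunctionConfigurations: [{
      Id: 'set-cache-control-headers',
      LambdaFunctionArn: 'arn:aws:lambda:REGION:ACCOUNT-ID:function:YOUR-FUNCTION',
      Events: ['s3:ObjectCreated:*'],              // Event type: Object Created (All)
      Filter: {
        Key: {
          FilterRules: [
            { Name: 'prefix', Value: 'assets/' },  // optional: target a folder
            { Name: 'suffix', Value: '.js' }       // optional: target a file type
          ]
        }
      }
    }]
  }
};

s3.putBucketNotificationConfiguration(params, (err, data) => {
  if (err) console.log(err);
  else console.log('Trigger configured on the bucket');
});
```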
The next page is where we configure the actual function and add the code to be run when a trigger is received (which we configured in the previous step). Enter the following:
- Name: Enter a name for your function.
- Description: Enter a description for your function, or leave it as the default, or blank.
- Runtime: Node.js 6.10
- Code entry type: Edit code inline
Copy & paste the following code into the code box provided. Be sure to overwrite all code that is populated by default:
```javascript
'use strict';

// CONFIGURATION //////////////////////////////////////////////
var CacheControlHeader = 'max-age=31536000';
var ContentEncodingHeader = 'gzip';
var ContentTypeHeader = 'application/javascript';
///////////////////////////////////////////////////////////////

let aws = require('aws-sdk');
let s3 = new aws.S3({ apiVersion: '2006-03-01' });

exports.handler = (event, context, callback) => {
    // The S3 event tells us which bucket and (URL-encoded) key triggered us.
    const bucket = event.Records[0].s3.bucket.name;
    const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
    var params = { Bucket: bucket, Key: key };
    var message;

    // Fetch the object's current metadata.
    s3.getObject(params, (err, data) => {
        if (err) {
            console.log(err);
            message = 'Error: Failed to get object: s3://' + bucket + '/' + key + '. Make sure it is in the same region as this function!';
            console.log(message);
            callback(message);
        } else if (data.CacheControl != CacheControlHeader) {
            // Copy the object onto itself, replacing its metadata.
            var copyParams = {
                Bucket: bucket,
                Key: key,
                CopySource: encodeURIComponent(bucket + '/' + key),
                CacheControl: CacheControlHeader,
                ContentEncoding: ContentEncodingHeader,
                ContentType: ContentTypeHeader,
                Metadata: {},
                MetadataDirective: 'REPLACE'
            };
            s3.copyObject(copyParams, (err, data) => {
                if (err) {
                    console.log(err);
                    message = 'Error: Failed to update metadata on object: s3://' + bucket + '/' + key + '. Make sure it is in the same region as this function!';
                    console.log(message);
                    callback(message);
                } else {
                    message = 'Metadata updated successfully! OBJECT: s3://' + bucket + '/' + key + ' CACHE-CONTROL: ' + CacheControlHeader + ' CONTENT-ENCODING: ' + ContentEncodingHeader + ' CONTENT-TYPE: ' + ContentTypeHeader;
                    console.log(message);
                    callback(null, message);
                }
            });
        } else {
            // Nothing to do: the Cache-Control header is already set.
            message = 'Metadata already up to date. OBJECT: s3://' + bucket + '/' + key + ' CACHE-CONTROL: ' + CacheControlHeader;
            console.log(message);
            callback(null, message);
        }
    });
};
```
Feel free to modify the CONFIGURATION section to suit your specific needs. The CacheControlHeader variable is where you specify the Cache-Control value to be applied; note that ContentEncodingHeader and ContentTypeHeader are also written onto every object the trigger matches, so adjust them (or the trigger's suffix filter) if your bucket holds anything other than gzipped JavaScript.
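For example, one variation is to force only Cache-Control and keep each object's existing Content-Type, Content-Encoding, and user metadata. The sketch below is not part of the original walkthrough; it shows just the replacement copyObject parameters, wrapped in a hypothetical buildCopyParams helper, and assumes the same bucket, key, and getObject response (data) as the handler above.

```javascript
// Variation sketch: force Cache-Control but preserve the object's own
// Content-Type, Content-Encoding, and user metadata as returned by getObject.
var CacheControlHeader = 'max-age=31536000';

function buildCopyParams(bucket, key, data) {
  return {
    Bucket: bucket,
    Key: key,
    CopySource: encodeURIComponent(bucket + '/' + key),
    CacheControl: CacheControlHeader,
    ContentEncoding: data.ContentEncoding, // keep whatever encoding was uploaded (may be undefined)
    ContentType: data.ContentType,         // keep the original MIME type
    Metadata: data.Metadata || {},         // keep user metadata instead of wiping it
    MetadataDirective: 'REPLACE'
  };
}
```

With this variation you would pass buildCopyParams(bucket, key, data) to s3.copyObject in place of the inline copyParams object.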
After the code box, there are a few more options to configure. Here is what I went with:
- Handler: index.handler
- Role: S3 execution role. This opens a new window in AWS IAM, where you will need to configure a role that can access your S3 bucket. Use the following:
  - Role Description: Lambda execution role permissions
  - IAM Role: lambda_s3_exec_role
  - Policy Name: Create a new Role Policy
  - Click the blue Allow button in the bottom right. The window will close, and the Role option will now be populated with the role we just created.
  - Important: this role should have an AmazonS3FullAccess policy (a narrower alternative is sketched after this list).
- Memory (MB): 128
- Timeout: 0 min, 5 sec
- VPC: No VPC
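AmazonS3FullAccess works, but it grants far more than this function needs. If you would rather scope the role down, the sketch below attaches an inline policy covering only what the code actually calls: s3:GetObject and s3:PutObject on your bucket (copyObject needs both), plus CloudWatch Logs for the console.log output. The role name, policy name, and bucket are placeholders, and this is an alternative I am suggesting, not a reproduction of the console's S3 execution role template.

```javascript
// Sketch: replace AmazonS3FullAccess with a narrower inline role policy (aws-sdk v2).
// Role name, policy name, and bucket are placeholders.
let aws = require('aws-sdk');
let iam = new aws.IAM();

const policy = {
  Version: '2012-10-17',
  Statement: [
    { // read the uploaded object and write it back with new metadata
      Effect: 'Allow',
      Action: ['s3:GetObject', 's3:PutObject'],
      Resource: 'arn:aws:s3:::YOUR-BUCKET/*'
    },
    { // let the function write its console.log output to CloudWatch Logs
      Effect: 'Allow',
      Action: ['logs:CreateLogGroup', 'logs:CreateLogStream', 'logs:PutLogEvents'],
      Resource: 'arn:aws:logs:*:*:*'
    }
  ]
};

iam.putRolePolicy({
  RoleName: 'lambda_s3_exec_role',
  PolicyName: 'set-cache-control-headers',
  PolicyDocument: JSON.stringify(policy)
}, (err) => {
  if (err) console.log(err);
  else console.log('Policy attached to the execution role');
});
```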
The final page is our review page. To enable your function immediately, you will want to ensure you select the Enable event source checkbox. Click the blue Create Function button. You should be taken to your newly created Lambda function, on the Event sources tab.
Note: For larger files, you may need to adjust the Timeout option.
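If you do hit the limit, the timeout (and memory) can also be raised later from the function's configuration page or with the SDK; a minimal sketch, with the function name and the new values as placeholders:

```javascript
// Sketch: raise the timeout and memory for larger objects (aws-sdk v2).
// The function name and the new values are placeholders.
let aws = require('aws-sdk');
let lambda = new aws.Lambda();

lambda.updateFunctionConfiguration({
  FunctionName: 'YOUR-FUNCTION',
  Timeout: 30,      // seconds
  MemorySize: 256   // MB
}, (err, data) => {
  if (err) console.log(err);
  else console.log('Timeout is now ' + data.Timeout + 's');
});
```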
Now, every time you upload or update an object in your bucket, the headers will be set automatically by this Lambda function.
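To verify the setup without waiting for a real upload, you can also invoke the handler directly with a hand-built S3 event (the Test button in the Lambda console with the sample S3 Put event does much the same). The sketch below assumes the code above is saved as index.js, that aws-sdk is installed locally (it is built into the Lambda runtime), that AWS credentials are available in your environment, and that the placeholder bucket and key point at an existing object.

```javascript
// Sketch: call the handler locally with a hand-built S3 event record.
// YOUR-BUCKET and assets/app.js are placeholders for a real object.
const handler = require('./index').handler;

const sampleEvent = {
  Records: [{
    s3: {
      bucket: { name: 'YOUR-BUCKET' },
      object: { key: 'assets/app.js' }
    }
  }]
};

handler(sampleEvent, {}, (err, result) => {
  if (err) console.log('FAILED:', err);
  else console.log('OK:', result);
});
```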