# AWS SAM (Serverless Application Model)
AWS Serverless Application Model (SAM) is an open-source framework designed to build, test, and deploy serverless applications on AWS. It extends AWS CloudFormation by providing shorthand syntax for defining Lambda functions, API Gateway, DynamoDB tables, and other serverless resources.
With AWS SAM, you can:
- Define serverless applications using a simplified YAML syntax (see the minimal template sketch after the comparison table below).
- Build locally using the AWS SAM CLI (e.g., test Lambda, invoke APIs, debug).
- Deploy applications with built-in best practices.
- Use AWS SAM Accelerate for faster deployments.
| Feature | AWS SAM | AWS CloudFormation |
|---|---|---|
| Purpose | Designed for serverless applications | Manages all AWS infrastructure |
| Syntax | Simplified YAML with built-in serverless components | More verbose; requires manual setup for serverless |
| Local Testing | Supports local testing/debugging using the SAM CLI | No built-in local testing |
| CLI Features | `sam build`, `sam deploy`, `sam local invoke` | Uses `aws cloudformation deploy` |
| Transform | Uses the `AWS::Serverless-2016-10-31` transform to convert SAM templates into standard CloudFormation | No transformation required |
| Deployment Speed | Faster for serverless apps using incremental changes | Slower due to full-stack deployments |
| Extensibility | Can use CloudFormation resources inside a SAM template | Fully flexible for all AWS resources |
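To illustrate the simplified syntax described above, here is a minimal SAM template sketch. The function name, handler, runtime, and paths are placeholders chosen for this example, not values from this wiki:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31   # expands SAM shorthand into standard CloudFormation

Resources:
  HelloFunction:                        # placeholder function name
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/
      Handler: app.handler
      Runtime: nodejs18.x
      Events:
        HelloApi:                       # creates an implicit API Gateway endpoint
          Type: Api
          Properties:
            Path: /hello
            Method: get
```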
✅ Use AWS SAM when:
- You are building serverless applications (Lambda, API Gateway, DynamoDB, Step Functions).
- You need local testing/debugging before deployment.
- You want a simplified YAML template for infrastructure.
- You need faster deployments with SAM Accelerate (`sam sync`), as sketched below.
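As a rough illustration of the SAM Accelerate workflow mentioned above, `sam sync` can watch your project and push incremental changes; the stack name here is a placeholder:

```bash
# Continuously sync local code and template changes to the deployed stack (placeholder stack name)
sam sync --stack-name my-app-stack --watch
```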
❌ Use CloudFormation when:
- You are managing full-stack AWS infrastructure (e.g., EC2, RDS, VPC, networking).
- You are defining complex dependencies across AWS services.
- You are working with hybrid architectures (serverless + traditional).
In short, AWS SAM is CloudFormation specialized for serverless applications, with added benefits like local testing and simplified configuration. If you're building a serverless app, use SAM; if you're managing broader AWS infrastructure, use CloudFormation.
## Environment Variable Precedence When Running SAM Locally

When running AWS SAM locally, environment variables can be set in multiple places, and the order of precedence determines which value is used. AWS SAM resolves environment variables in this order:
| Priority | Source | Takes Effect When |
|---|---|---|
| 1️⃣ Highest | Manually exported variable (`export` / `set`) | You define it in the terminal before running `sam local invoke` |
| 2️⃣ | `--env-vars env.json` | You explicitly pass `env.json` when invoking the function |
| 3️⃣ | `template.yaml` (`Environment.Variables`) | Defined in `template.yaml`; used by `sam local invoke` and `sam deploy` |
| 4️⃣ Lowest | Default AWS values | If a variable isn't set anywhere else, AWS uses its default behavior |
✅ Manually exported variables (`export`/`set`) take top priority
✅ `env.json` overrides `template.yaml` when passed with `--env-vars`
✅ If nothing else is defined, `template.yaml` values apply
✅ AWS defaults apply if no environment variables are explicitly set
To ensure consistency across environments:
- Use `template.yaml` for deployment consistency.
- Use `env.json` for local testing (see the example below).
- Use exported variables only for temporary overrides.
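For reference, a minimal `env.json` sketch; the function logical ID and variable names are placeholders and are assumed to already be declared under `Environment.Variables` in `template.yaml`:

```json
{
  "MyFunction": {
    "TABLE_NAME": "local-test-table",
    "LOG_LEVEL": "debug"
  }
}
```

Passing it with `sam local invoke MyFunction --env-vars env.json` overrides the matching variables defined in `template.yaml`.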
## Loading Multiple Secrets from AWS Secrets Manager

If your Lambda function needs multiple secrets (e.g., MongoDB URI, Redis credentials, API keys), you can retrieve all of them at once from AWS Secrets Manager.
Instead of creating separate secrets, store them as a single JSON object under one secret. Save your secrets in key-value format:

```json
{
  "mongodb_uri": "mongodb://user:pass@your-cluster:27017/database",
  "redis_host": "your-redis-endpoint",
  "redis_port": "6379",
  "api_key": "your-secure-api-key"
}
```

Give the secret a name, e.g., `app/credentials`.
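If you prefer the CLI, something along these lines should create the secret; the file name `secrets.json` is an assumption for this example:

```bash
# Create the secret from a local JSON file (assumed to contain the key-value pairs above)
aws secretsmanager create-secret \
  --name app/credentials \
  --secret-string file://secrets.json
```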
Modify your SAM template to grant Lambda access to Secrets Manager:

```yaml
Resources:
  MyLambdaFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/
      Handler: app.handler
      Runtime: nodejs18.x
      MemorySize: 256
      Timeout: 10
      Policies:
        - AWSSecretsManagerGetSecretValuePolicy:
            SecretArn: !Sub "arn:aws:secretsmanager:${AWS::Region}:${AWS::AccountId}:secret:app/credentials-*"
      Environment:
        Variables:
          SECRETS_MANAGER_ARN: !Sub "arn:aws:secretsmanager:${AWS::Region}:${AWS::AccountId}:secret:app/credentials"
```
Install the AWS SDK v3 Secrets Manager client:

```bash
npm install @aws-sdk/client-secrets-manager
```
```javascript
const { SecretsManagerClient, GetSecretValueCommand } = require("@aws-sdk/client-secrets-manager");

const secretsClient = new SecretsManagerClient({ region: process.env.AWS_REGION || "us-east-1" });

// Fetch and parse all secrets stored under the single Secrets Manager entry
const getSecrets = async () => {
  try {
    const secretArn = process.env.SECRETS_MANAGER_ARN;
    const secret = await secretsClient.send(new GetSecretValueCommand({ SecretId: secretArn }));
    const secretValues = JSON.parse(secret.SecretString);

    return {
      mongodbURI: secretValues.mongodb_uri,
      redisHost: secretValues.redis_host,
      redisPort: secretValues.redis_port,
      apiKey: secretValues.api_key
    };
  } catch (err) {
    console.error("❌ Error fetching secrets:", err);
    throw err;
  }
};

exports.handler = async () => {
  try {
    const secrets = await getSecrets();

    console.log("✅ MongoDB URI:", secrets.mongodbURI);
    console.log("✅ Redis Host:", secrets.redisHost);
    console.log("✅ Redis Port:", secrets.redisPort);
    console.log("✅ API Key:", secrets.apiKey);

    return {
      statusCode: 200,
      body: JSON.stringify({ message: "Secrets Loaded Successfully" })
    };
  } catch (error) {
    return {
      statusCode: 500,
      body: JSON.stringify({ error: "Failed to load secrets" })
    };
  }
};
```
Build and deploy:

```bash
sam build && sam deploy --guided
```

If testing locally, pass your AWS region:

```bash
sam local invoke MyLambdaFunction --env-vars env.json
```

Or manually:

```bash
export AWS_REGION=us-east-1
sam local invoke MyLambdaFunction
```
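A possible `env.json` for this local test; the region and account ID in the ARN are placeholders you'd replace with your own values:

```json
{
  "MyLambdaFunction": {
    "SECRETS_MANAGER_ARN": "arn:aws:secretsmanager:us-east-1:123456789012:secret:app/credentials"
  }
}
```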
✅ Store multiple secrets in a single AWS Secrets Manager entry
✅ Grant IAM access in `template.yaml`
✅ Fetch all secrets dynamically at runtime
✅ Securely access MongoDB, Redis, and API keys in Lambda
## Caching Secrets in Redis (ElastiCache)

Fetching secrets from AWS Secrets Manager on every invocation adds latency and cost. A better approach is to cache the secrets in AWS ElastiCache (Redis) so that you get:
- Faster access (no repeated calls to Secrets Manager)
- Lower API call costs (fewer Secrets Manager requests)
- Improved Lambda cold-start performance
Before proceeding, ensure you have:
✔️ A Redis cluster running in AWS ElastiCache
✔️ Lambda access to Redis (security group + subnet configuration for the VPC)

If you don't have Redis yet, create a cluster via the AWS CLI:

```bash
aws elasticache create-cache-cluster \
  --cluster-id my-redis-cluster \
  --engine redis \
  --cache-node-type cache.t3.micro \
  --num-cache-nodes 1
```
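Once the cluster is available, you can look up its endpoint (used for `REDIS_HOST` below) with something like:

```bash
# Show node details, including the endpoint address, for the cluster created above
aws elasticache describe-cache-clusters \
  --cache-cluster-id my-redis-cluster \
  --show-cache-node-info
```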
Modify the AWS SAM `template.yaml` to allow Lambda to use Secrets Manager and Redis:

```yaml
Resources:
  MyLambdaFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/
      Handler: app.handler
      Runtime: nodejs18.x
      MemorySize: 256
      Timeout: 10
      VpcConfig:
        SecurityGroupIds:
          - sg-1234567890abcdef0        # Update with your Redis Security Group
        SubnetIds:
          - subnet-abcdef1234567890     # Update with your VPC Subnet ID
      Policies:
        - AWSSecretsManagerGetSecretValuePolicy:
            SecretArn: !Sub "arn:aws:secretsmanager:${AWS::Region}:${AWS::AccountId}:secret:app/credentials-*"
      Environment:
        Variables:
          SECRETS_MANAGER_ARN: !Sub "arn:aws:secretsmanager:${AWS::Region}:${AWS::AccountId}:secret:app/credentials"
          REDIS_HOST: "my-redis-cluster.xxxxxx.ng.0001.use1.cache.amazonaws.com"
          REDIS_PORT: "6379"
```
Install Redis and the AWS SDK:

```bash
npm install @aws-sdk/client-secrets-manager ioredis
```
This function will:
1️⃣ Check Redis first for cached secrets
2️⃣ If not found, fetch from AWS Secrets Manager
3️⃣ Store in Redis with an expiration time (e.g., 10 minutes)
```javascript
const { SecretsManagerClient, GetSecretValueCommand } = require("@aws-sdk/client-secrets-manager");
const Redis = require("ioredis");

const secretsClient = new SecretsManagerClient({ region: process.env.AWS_REGION || "us-east-1" });
const redis = new Redis({
  host: process.env.REDIS_HOST,
  port: process.env.REDIS_PORT
});

const getSecrets = async () => {
  try {
    // 🔹 1. Check if secrets are already cached in Redis
    const cachedSecrets = await redis.get("app:secrets");
    if (cachedSecrets) {
      console.log("✅ Using cached secrets from Redis");
      return JSON.parse(cachedSecrets);
    }

    // 🔹 2. Fetch secrets from AWS Secrets Manager
    console.log("⏳ Fetching secrets from Secrets Manager...");
    const secretArn = process.env.SECRETS_MANAGER_ARN;
    const secret = await secretsClient.send(new GetSecretValueCommand({ SecretId: secretArn }));
    const secretValues = JSON.parse(secret.SecretString);

    // 🔹 3. Cache secrets in Redis (expire after 10 minutes)
    await redis.set("app:secrets", JSON.stringify(secretValues), "EX", 600);
    console.log("✅ Secrets cached in Redis");

    return secretValues;
  } catch (err) {
    console.error("❌ Error fetching secrets:", err);
    throw err;
  }
};

exports.handler = async () => {
  try {
    const secrets = await getSecrets();

    console.log("🔑 MongoDB URI:", secrets.mongodb_uri);
    console.log("🔑 Redis Host:", secrets.redis_host);
    console.log("🔑 Redis Port:", secrets.redis_port);
    console.log("🔑 API Key:", secrets.api_key);

    return {
      statusCode: 200,
      body: JSON.stringify({ message: "Secrets Loaded & Cached Successfully" })
    };
  } catch (error) {
    return {
      statusCode: 500,
      body: JSON.stringify({ error: "Failed to load secrets" })
    };
  }
};
```
Build and deploy:

```bash
sam build && sam deploy --guided
```

Test locally:

```bash
sam local invoke MyLambdaFunction --env-vars env.json
```

After the first call, the secrets should be cached in Redis. Run the function again and the secrets should now be fetched from Redis instead of Secrets Manager.
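To verify the cache entry directly (assuming you can reach the cluster from a host inside the VPC), you can inspect the key with `redis-cli`:

```bash
# Inspect the cached secrets entry and its remaining TTL (replace the host with your cluster endpoint)
redis-cli -h my-redis-cluster.xxxxxx.ng.0001.use1.cache.amazonaws.com GET app:secrets
redis-cli -h my-redis-cluster.xxxxxx.ng.0001.use1.cache.amazonaws.com TTL app:secrets
```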
✅ Fast secret access (fetch from Redis instead of Secrets Manager)
✅ Lower AWS API costs (fewer Secrets Manager requests)
✅ Secrets refreshed every 10 minutes
To clear the cache and force a fresh secrets retrieval:

```bash
redis-cli -h my-redis-cluster.xxxxxx.ng.0001.use1.cache.amazonaws.com DEL app:secrets
```
## Recreating the AWS SAM Deployment S3 Bucket

If you've accidentally deleted the S3 bucket used by AWS SAM, you'll need to create a new bucket and point AWS SAM at it.
You can create an S3 bucket manually or use the AWS CLI:

```bash
aws s3 mb s3://my-new-sam-bucket --region us-east-1
```

🔹 Replace `my-new-sam-bucket` with a unique bucket name.
🔹 Replace `us-east-1` with your AWS region.
Modify your SAM configuration file (`samconfig.toml`) to reference the new bucket.

🔹 Locate `samconfig.toml` in your project.
🔹 Update the `s3_bucket` value:

```toml
[default.deploy.parameters]
s3_bucket = "my-new-sam-bucket"
s3_prefix = "my-app"
region = "us-east-1"
capabilities = "CAPABILITY_IAM"
stack_name = "my-app-stack"
```
Alternatively, specify the new bucket during deployment:

```bash
sam deploy --s3-bucket my-new-sam-bucket
```

Run the following to rebuild and redeploy:

```bash
sam build
sam deploy --guided
```
This ensures that SAM now uses the new S3 bucket.
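To confirm that deployment artifacts are landing in the new bucket, you can list it after a deploy (bucket name and prefix as configured above):

```bash
# List uploaded deployment artifacts under the configured prefix
aws s3 ls s3://my-new-sam-bucket/my-app/ --recursive
```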
If AWS SAM was still pointing at the deleted bucket, you might encounter an error like:

🚨 "The specified bucket does not exist"

To fix this:
- Remove the local build artifacts and cache: `rm -rf .aws-sam`
- Then rebuild and deploy again.
✔️ Created a new S3 bucket
✔️ Updated AWS SAM to use the new bucket
✔️ Rebuilt & deployed the application
## Connecting a Lambda Function in a VPC to MongoDB Atlas

To enable an AWS Lambda function inside a VPC to communicate with MongoDB Atlas, set up either a Private Endpoint (recommended) or VPC Peering.

### Option 1: Private Endpoint (AWS PrivateLink)

The private endpoint approach is more secure and avoids exposing traffic to the internet.
In MongoDB Atlas:
- Go to the MongoDB Atlas Console → select your Cluster.
- Click Network Access → Private Endpoint.
- Click Add Private Endpoint → select AWS.
- Copy the VPC Endpoint Service Name shown by MongoDB.

In AWS:
- Go to AWS Console → VPC → Endpoints.
- Click Create Endpoint.
- Select AWS PrivateLink.
- Paste the MongoDB Atlas Service Name.
- Select your VPC and Subnet (the private subnet used by Lambda).
- Choose a Security Group that allows outbound access to MongoDB (port 27017).
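If you prefer the CLI, the endpoint can also be created with something like the following; the VPC, subnet, and security group IDs as well as the service name are placeholders you'd replace with your own values:

```bash
# Create an interface VPC endpoint for the MongoDB Atlas PrivateLink service (placeholder IDs)
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0 \
  --subnet-ids subnet-abcdef1234567890 \
  --security-group-ids sg-1234567890abcdef0
```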
Next, update the Lambda security group:
- Go to AWS Console → EC2 → Security Groups.
- Find the Security Group attached to your Lambda function.
- Allow outbound access to the VPC Endpoint:
  - Protocol: TCP
  - Port: 27017
  - Destination: VPC Endpoint IP or MongoDB Atlas CIDR
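The same outbound rule can be added from the CLI; the security group ID and CIDR below are placeholders:

```bash
# Allow outbound TCP 27017 from the Lambda security group to the MongoDB Atlas CIDR (placeholder values)
aws ec2 authorize-security-group-egress \
  --group-id sg-1234567890abcdef0 \
  --protocol tcp \
  --port 27017 \
  --cidr 192.168.248.0/21
```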
Modify your Lambda function to connect using the MongoDB Atlas private endpoint:

```python
import os
import pymongo

# Read the connection string from an environment variable, falling back to the private endpoint URI
MONGO_URI = os.environ.get("MONGO_URI", "mongodb+srv://your-private-endpoint.mongodb.net/mydb")

client = pymongo.MongoClient(MONGO_URI)
db = client.mydb

def lambda_handler(event, context):
    data = db.collection.find_one()
    return {"data": data}
```
### Option 2: VPC Peering

If a Private Endpoint is not an option, use VPC Peering.
- Go to MongoDB Atlas Console → Network Access.
- Click VPC Peering → Select AWS.
- Copy the VPC Peering Request ID.
- Go to AWS Console → VPC → Peering Connections.
- Find the MongoDB Peering Request → Click Accept.
- Go to AWS Console → Route Tables.
- Find the Route Table associated with your Lambda's VPC.
- Add a new route:
  - Destination: MongoDB Atlas CIDR
  - Target: VPC Peering Connection ID
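This route can also be added via the CLI; the route table ID, CIDR, and peering connection ID below are placeholders:

```bash
# Route traffic destined for the MongoDB Atlas CIDR through the peering connection (placeholder IDs)
aws ec2 create-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 192.168.248.0/21 \
  --vpc-peering-connection-id pcx-0123456789abcdef0
```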
Finally, in the Lambda security group, allow outbound TCP on port 27017 to the MongoDB Atlas CIDR.
✅ Lambda can now connect to MongoDB Atlas securely! 🎉