AWS SAM (Serverless Application Model) - FullstackCodingGuy/Developer-Fundamentals GitHub Wiki

What is AWS SAM?

AWS Serverless Application Model (SAM) is an open-source framework designed to build, test, and deploy serverless applications on AWS. It extends AWS CloudFormation by providing shorthand syntax for defining Lambda functions, API Gateway, DynamoDB tables, and other serverless resources.

With AWS SAM, you can:

  • Define serverless applications using a simplified YAML syntax.
  • Build locally using the AWS SAM CLI (e.g., test Lambda, invoke APIs, debug).
  • Deploy applications with built-in best practices.
  • Use AWS SAM Accelerate for faster deployments.
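To make the "simplified YAML syntax" concrete, here is a minimal sketch of a `template.yaml` for one function behind an API endpoint (the names `HelloFunction`, `src/`, and `app.handler` are illustrative placeholders):

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31   # expands SAM shorthand into standard CloudFormation

Resources:
  HelloFunction:
    Type: AWS::Serverless::Function     # shorthand for a Lambda function + execution role
    Properties:
      CodeUri: src/
      Handler: app.handler
      Runtime: nodejs18.x
      Events:
        HelloApi:
          Type: Api                     # implicitly creates an API Gateway endpoint
          Properties:
            Path: /hello
            Method: get
```

The equivalent raw CloudFormation would require explicitly defining the function, its IAM role, the API, and their wiring.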

AWS SAM vs. AWS CloudFormation

| Feature | AWS SAM | AWS CloudFormation |
|---|---|---|
| Purpose | Designed for serverless applications | Manages all AWS infrastructure |
| Syntax | Simplified YAML with built-in serverless components | More verbose; serverless resources must be defined manually |
| Local testing | Supports local testing/debugging using the SAM CLI | No built-in local testing |
| CLI features | `sam build`, `sam deploy`, `sam local invoke` | Uses `aws cloudformation deploy` |
| Transform | Uses the `AWS::Serverless-2016-10-31` transform to convert SAM templates into standard CloudFormation | No transformation required |
| Deployment speed | Faster for serverless apps using incremental changes | Slower due to full-stack deployments |
| Extensibility | Can use CloudFormation resources inside a SAM template | Fully flexible for all AWS resources |

When Should You Use AWS SAM?

Use AWS SAM when:

  • You are building serverless applications (Lambda, API Gateway, DynamoDB, Step Functions).
  • You need local testing/debugging before deployment.
  • You want a simplified YAML template for infrastructure.
  • You need faster deployments with AWS SAM Accelerate (`sam sync`).

Use CloudFormation when:

  • Managing a full-stack AWS infrastructure (e.g., EC2, RDS, VPC, networking).
  • Defining complex dependencies across AWS services.
  • Working with hybrid architectures (serverless + traditional).

Conclusion

AWS SAM is an extension of CloudFormation tailored to serverless applications, with added benefits like local testing and simplified configuration. If you're building a serverless app, use SAM. If you're managing a broader AWS infrastructure, use CloudFormation.

✅ Environment Variable Precedence in AWS SAM (template.yaml vs env.json)

When running AWS SAM locally, environment variables can be set in multiple places. The order of precedence determines which value will be used.


🔹 Precedence Order (Highest to Lowest Priority)

When resolving environment variables, AWS SAM follows this order:

| Priority | Source | Takes Effect When |
|---|---|---|
| 1️⃣ (highest) | Manually exported variable (`export` / `set`) | You define it in the terminal before running `sam local invoke` |
| 2️⃣ | `--env-vars env.json` | You explicitly pass `env.json` when invoking the function |
| 3️⃣ | `template.yaml` (`Environment.Variables`) | Defined in `template.yaml`; used by `sam local invoke` and `sam deploy` |
| 4️⃣ (lowest) | Default AWS values | If a variable isn't set anywhere else, AWS uses default behavior |

🔹 Key Takeaways

  • Manually exported variables (`export`/`set`) take top priority.
  • `env.json` overrides `template.yaml` when passed with `--env-vars`.
  • If nothing else is defined, `template.yaml` values apply.
  • AWS defaults apply if no environment variables are explicitly set.
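The precedence rules above can be sketched as a tiny resolver. This is purely illustrative — it mimics the described behavior of the SAM CLI, it is not a SAM API:

```javascript
// Resolve an environment variable following the documented SAM precedence:
// shell-exported value > env.json (--env-vars) > template.yaml > AWS default.
function resolveEnvVar(name, { shellEnv = {}, envJson = {}, templateEnv = {}, awsDefault } = {}) {
  if (shellEnv[name] !== undefined) return shellEnv[name];       // 1. exported in the terminal
  if (envJson[name] !== undefined) return envJson[name];         // 2. passed via --env-vars
  if (templateEnv[name] !== undefined) return templateEnv[name]; // 3. Environment.Variables
  return awsDefault;                                             // 4. AWS default behavior
}

// Example: env.json wins over template.yaml when no shell export exists.
const sources = {
  envJson: { DB_HOST: "localhost" },
  templateEnv: { DB_HOST: "prod-db.internal", STAGE: "dev" },
};
console.log(resolveEnvVar("DB_HOST", sources)); // "localhost"
console.log(resolveEnvVar("STAGE", sources));   // "dev"
```

Note that `env.json` for `sam local invoke` is keyed by logical function name, e.g. `{ "MyLambdaFunction": { "DB_HOST": "localhost" } }`.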


🚀 Best Practice

To ensure consistency across environments:

  • Use template.yaml for deployment consistency
  • Use env.json for local testing
  • Use exported variables only for temporary overrides



✅ Dynamically Loading Multiple Secrets from AWS Secrets Manager in Lambda

If your Lambda function needs multiple secrets (e.g., MongoDB URI, Redis credentials, API keys), you can retrieve all of them at once from AWS Secrets Manager.


1️⃣ Store Multiple Secrets in AWS Secrets Manager

Instead of creating separate secrets, you can store them in a single JSON object under one secret.

Go to AWS Secrets Manager → Store a new secret

Save your multiple secrets in key-value format:

{
  "mongodb_uri": "mongodb://user:pass@your-cluster:27017/database",
  "redis_host": "your-redis-endpoint",
  "redis_port": "6379",
  "api_key": "your-secure-api-key"
}

Give it a name, e.g., app/credentials.
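If you prefer the CLI, the same secret can be created in one step (assuming the JSON above is saved locally as `secrets.json`):

```bash
aws secretsmanager create-secret \
  --name app/credentials \
  --secret-string file://secrets.json
```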


2️⃣ Update template.yaml to Reference the Secret

Modify your SAM template to grant Lambda access to Secrets Manager:

Resources:
  MyLambdaFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/
      Handler: app.handler
      Runtime: nodejs18.x
      MemorySize: 256
      Timeout: 10
      Policies:
        - AWSSecretsManagerGetSecretValuePolicy:
            SecretArn: !Sub "arn:aws:secretsmanager:${AWS::Region}:${AWS::AccountId}:secret:app/credentials-*"
      Environment:
        Variables:
          SECRETS_MANAGER_ARN: !Sub "arn:aws:secretsmanager:${AWS::Region}:${AWS::AccountId}:secret:app/credentials"

3️⃣ Fetch Multiple Secrets in Your Lambda Function

Install AWS SDK

npm install @aws-sdk/client-secrets-manager

Retrieve and Use Multiple Secrets in Lambda

const { SecretsManagerClient, GetSecretValueCommand } = require("@aws-sdk/client-secrets-manager");

const secretsClient = new SecretsManagerClient({ region: process.env.AWS_REGION || "us-east-1" });

const getSecrets = async () => {
  try {
    const secretArn = process.env.SECRETS_MANAGER_ARN;
    const secret = await secretsClient.send(new GetSecretValueCommand({ SecretId: secretArn }));
    const secretValues = JSON.parse(secret.SecretString);

    return {
      mongodbURI: secretValues.mongodb_uri,
      redisHost: secretValues.redis_host,
      redisPort: secretValues.redis_port,
      apiKey: secretValues.api_key
    };
  } catch (err) {
    console.error("❌ Error fetching secrets:", err);
    throw err;
  }
};

exports.handler = async () => {
  try {
    const secrets = await getSecrets();
    
    console.log("✅ MongoDB URI:", secrets.mongodbURI);
    console.log("✅ Redis Host:", secrets.redisHost);
    console.log("✅ Redis Port:", secrets.redisPort);
    console.log("✅ API Key:", secrets.apiKey);

    return {
      statusCode: 200,
      body: JSON.stringify({ message: "Secrets Loaded Successfully" })
    };
  } catch (error) {
    return {
      statusCode: 500,
      body: JSON.stringify({ error: "Failed to load secrets" })
    };
  }
};

4️⃣ Deploy & Run Locally

Deploy Lambda

sam build && sam deploy --guided

Run Locally with AWS SAM

If testing locally, pass your AWS region:

sam local invoke MyLambdaFunction --env-vars env.json

Or manually:

export AWS_REGION=us-east-1
sam local invoke MyLambdaFunction

🚀 Final Steps

  • Store multiple secrets in a single AWS Secrets Manager entry.
  • Grant IAM access in `template.yaml`.
  • Fetch all secrets dynamically at runtime.
  • Securely access MongoDB, Redis, and API keys in Lambda.


✅ Caching AWS Secrets Using ElastiCache (Redis) in Lambda

Fetching secrets from AWS Secrets Manager on every invocation adds latency and cost. A better approach is to cache secrets in AWS ElastiCache (Redis), which gives you:

  • Faster access (No repeated calls to Secrets Manager)
  • Lower API call costs (Avoid frequent Secrets Manager requests)
  • Improved Lambda cold start performance

1️⃣ Setup AWS ElastiCache for Redis

Before proceeding, ensure you have:
✔️ A Redis cluster running in AWS ElastiCache
✔️ Network access from Lambda to Redis (security group + subnet configuration for the VPC)

If you don’t have Redis, create one via AWS CLI:

aws elasticache create-cache-cluster \
  --cluster-id my-redis-cluster \
  --engine redis \
  --cache-node-type cache.t3.micro \
  --num-cache-nodes 1

2️⃣ Update template.yaml to Grant Access to Redis

Modify AWS SAM template.yaml to allow Lambda to use Secrets Manager and Redis:

Resources:
  MyLambdaFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/
      Handler: app.handler
      Runtime: nodejs18.x
      MemorySize: 256
      Timeout: 10
      VpcConfig:
        SecurityGroupIds:
          - sg-1234567890abcdef0  # Update with your Redis Security Group
        SubnetIds:
          - subnet-abcdef1234567890  # Update with your VPC Subnet ID
      Policies:
        - AWSSecretsManagerGetSecretValuePolicy:
            SecretArn: !Sub "arn:aws:secretsmanager:${AWS::Region}:${AWS::AccountId}:secret:app/credentials-*"
      Environment:
        Variables:
          SECRETS_MANAGER_ARN: !Sub "arn:aws:secretsmanager:${AWS::Region}:${AWS::AccountId}:secret:app/credentials"
          REDIS_HOST: "my-redis-cluster.xxxxxx.ng.0001.use1.cache.amazonaws.com"
          REDIS_PORT: "6379"

3️⃣ Install Required Packages

Install Redis and AWS SDK:

npm install @aws-sdk/client-secrets-manager ioredis

4️⃣ Implement Secret Caching in Lambda

This function will:
1️⃣ Check Redis first for cached secrets
2️⃣ If not found, fetch from AWS Secrets Manager
3️⃣ Store in Redis with an expiration time (e.g., 10 minutes)

Lambda Function (app.js)

const { SecretsManagerClient, GetSecretValueCommand } = require("@aws-sdk/client-secrets-manager");
const Redis = require("ioredis");

const secretsClient = new SecretsManagerClient({ region: process.env.AWS_REGION || "us-east-1" });

const redis = new Redis({
  host: process.env.REDIS_HOST,
  port: process.env.REDIS_PORT
});

const getSecrets = async () => {
  try {
    // 🔹 1. Check if secrets are already cached in Redis
    const cachedSecrets = await redis.get("app:secrets");
    if (cachedSecrets) {
      console.log("✅ Using cached secrets from Redis");
      return JSON.parse(cachedSecrets);
    }

    // 🔹 2. Fetch secrets from AWS Secrets Manager
    console.log("⏳ Fetching secrets from Secrets Manager...");
    const secretArn = process.env.SECRETS_MANAGER_ARN;
    const secret = await secretsClient.send(new GetSecretValueCommand({ SecretId: secretArn }));
    const secretValues = JSON.parse(secret.SecretString);

    // 🔹 3. Cache secrets in Redis (Expire after 10 minutes)
    await redis.set("app:secrets", JSON.stringify(secretValues), "EX", 600);

    console.log("✅ Secrets cached in Redis");
    return secretValues;
  } catch (err) {
    console.error("❌ Error fetching secrets:", err);
    throw err;
  }
};

exports.handler = async () => {
  try {
    const secrets = await getSecrets();

    console.log("🔑 MongoDB URI:", secrets.mongodb_uri);
    console.log("🔑 Redis Host:", secrets.redis_host);
    console.log("🔑 Redis Port:", secrets.redis_port);
    console.log("🔑 API Key:", secrets.api_key);

    return {
      statusCode: 200,
      body: JSON.stringify({ message: "Secrets Loaded & Cached Successfully" })
    };
  } catch (error) {
    return {
      statusCode: 500,
      body: JSON.stringify({ error: "Failed to load secrets" })
    };
  }
};

5️⃣ Deploy & Test

Deploy Lambda

sam build && sam deploy --guided

Run Locally

sam local invoke MyLambdaFunction --env-vars env.json

Verify Redis Caching

After the first call, secrets should be cached in Redis.
Run again → secrets should now be fetched from Redis, not Secrets Manager.


🚀 Final Optimization

  • Fast secret access (fetched from Redis instead of Secrets Manager).
  • Lower AWS API costs (fewer Secrets Manager requests).
  • Secrets refreshed every 10 minutes (the cache TTL).
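The cache-aside flow used above can be illustrated without any AWS or Redis dependency — here a plain `Map` with expiry timestamps stands in for Redis, and `fakeFetchSecrets` is a made-up stand-in for the Secrets Manager call:

```javascript
// Cache-aside with TTL: return the cached value if still fresh,
// otherwise fetch it and store it with an expiration time.
const cache = new Map(); // key -> { value, expiresAt }

async function getWithCache(key, fetchFn, ttlMs) {
  const entry = cache.get(key);
  if (entry && entry.expiresAt > Date.now()) {
    return { value: entry.value, fromCache: true };   // cache hit
  }
  const value = await fetchFn();                      // e.g., the Secrets Manager call
  cache.set(key, { value, expiresAt: Date.now() + ttlMs });
  return { value, fromCache: false };                 // cache miss, now cached
}

// Usage: the first call fetches, the second is served from the cache.
let calls = 0;
const fakeFetchSecrets = async () => { calls++; return { api_key: "secret" }; };

(async () => {
  const first = await getWithCache("app:secrets", fakeFetchSecrets, 600000);
  const second = await getWithCache("app:secrets", fakeFetchSecrets, 600000);
  console.log(first.fromCache, second.fromCache, calls); // false true 1
})();
```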


🔹 Bonus: Force Cache Refresh

To clear the cache and force fresh secrets retrieval:

redis-cli -h my-redis-cluster.xxxxxx.ng.0001.use1.cache.amazonaws.com DEL app:secrets

🚨 Recovering from Accidental S3 Bucket Deletion in AWS SAM

If you've accidentally deleted the S3 bucket used by AWS SAM, you’ll need to create a new bucket and remap it to AWS SAM.


1️⃣ Create a New S3 Bucket

You can manually create an S3 bucket or use the AWS CLI:

aws s3 mb s3://my-new-sam-bucket --region us-east-1

🔹 Replace my-new-sam-bucket with a unique bucket name.
🔹 Replace us-east-1 with your AWS region.
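You can confirm the bucket exists and is reachable with your credentials (using the example name above):

```bash
aws s3api head-bucket --bucket my-new-sam-bucket
```

A silent exit means the bucket is accessible; a 404 means it does not exist.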


2️⃣ Update AWS SAM to Use the New Bucket

Modify your SAM configuration file (samconfig.toml) to reference the new bucket.

🔹 Locate samconfig.toml in your project.
🔹 Update the s3_bucket value:

[default.deploy.parameters]
s3_bucket = "my-new-sam-bucket"
s3_prefix = "my-app"
region = "us-east-1"
capabilities = "CAPABILITY_IAM"
stack_name = "my-app-stack"

Alternatively, specify the new bucket during deployment:

sam deploy --s3-bucket my-new-sam-bucket

3️⃣ Rebuild and Deploy AWS SAM

Run the following to rebuild and redeploy:

sam build
sam deploy --guided

This ensures that SAM now uses the new S3 bucket.


4️⃣ Optional: Clean Up Old References

If AWS SAM was using a deleted bucket, you might encounter an error like:

🚨 "The specified bucket does not exist"

To fix this:

  • Delete the local build directory (which also holds SAM's build cache):
    rm -rf .aws-sam
  • Then rebuild & deploy again.

✅ Summary

✔️ Created a new S3 bucket
✔️ Updated AWS SAM to use the new bucket
✔️ Rebuilt & deployed the application

Establishing or Troubleshooting Lambda to MongoDB Atlas Connectivity

To enable an AWS Lambda function inside a VPC to communicate with MongoDB Atlas, you need to set up VPC peering or a private endpoint (recommended).


Solution 1: Use MongoDB Atlas Private Endpoint (Recommended)

This approach is more secure and avoids internet exposure.

Step 1: Enable Private Link in MongoDB Atlas

  1. Go to MongoDB Atlas Console → Select your Cluster.
  2. Click Network Access → Private Endpoint.
  3. Click Add Private Endpoint → Select AWS.
  4. Copy the VPC Endpoint Service Name from MongoDB.

Step 2: Create a VPC Endpoint in AWS

  1. Go to AWS Console → VPC → Endpoints.
  2. Click Create Endpoint.
  3. Select AWS PrivateLink.
  4. Paste the MongoDB Atlas Service Name.
  5. Select your VPC and Subnet (Private Subnet used by Lambda).
  6. Choose Security Group that allows outbound access to MongoDB (port 27017).

Step 3: Modify Lambda Security Group

  1. Go to AWS Console → EC2 → Security Groups.
  2. Find the Security Group of Lambda.
  3. Allow outbound access to the VPC Endpoint:
    • Protocol: TCP
    • Port: 27017
    • Destination: VPC Endpoint IP or MongoDB CIDR

Step 4: Update Lambda Code to Connect

Modify your Lambda function to connect using MongoDB Atlas Private IP:

import os
import pymongo

# Read the connection string from an environment variable; the fallback
# below is a placeholder for your Atlas private-endpoint SRV URI.
MONGO_URI = os.environ.get("MONGO_URI", "mongodb+srv://your-private-endpoint.mongodb.net/mydb")

# Created once per Lambda container, reused across invocations.
client = pymongo.MongoClient(MONGO_URI)
db = client.mydb

def lambda_handler(event, context):
    data = db.collection.find_one()
    return {"data": data}

Solution 2: Use VPC Peering (Alternative)

If Private Endpoint is not an option, use VPC Peering.

Step 1: Create a VPC Peering Connection

  1. Go to MongoDB Atlas Console → Network Access.
  2. Click VPC Peering → Select AWS.
  3. Copy the VPC Peering Request ID.

Step 2: Accept the Peering Request in AWS

  1. Go to AWS Console → VPC → Peering Connections.
  2. Find the MongoDB Peering Request → Click Accept.

Step 3: Update Route Tables

  1. Go to AWS Console → Route Tables.
  2. Find the Route Table associated with your Lambda's VPC.
  3. Add a new route:
    • Destination: MongoDB Atlas CIDR
    • Target: VPC Peering ID

Step 4: Modify Lambda Security Group

  1. Allow outbound TCP on port 27017 to MongoDB Atlas CIDR.

Final Check

Lambda can now connect to MongoDB Atlas securely! 🎉
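If the connection still fails, a quick way to check raw network reachability from inside the VPC (e.g., from an EC2 instance in the same private subnet) is a TCP probe on port 27017 — `your-private-endpoint.mongodb.net` below is a placeholder for your actual endpoint hostname:

```bash
# Test TCP reachability to MongoDB Atlas on port 27017 (5-second timeout)
nc -zv -w 5 your-private-endpoint.mongodb.net 27017

# With PrivateLink, the hostname should resolve to a private IP inside your VPC
dig +short your-private-endpoint.mongodb.net
```

If the TCP probe times out, revisit the security group outbound rules and route tables from the steps above.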
