SES SQS SNS RDS EC2 S3 CloudFront Lambda - sgml/signature GitHub Wiki

Overview

MVC

          +----------------------+     +----------------------+
          |   Client Request     |     |     API Gateway      |
          |  (HTTP POST/GET)     |     |  (REST or HTTP API)  |
          +----------+-----------+     +----------+-----------+
                     |                         |
                     +-----------+-------------+
                                 |
                                 v
                        +----------------------+
                        |      Lambda          |
                        |  (Ephemeral Runtime) |
                        +----------+-----------+
                                   |
                +------------------+------------------+
                |                                     |
                v                                     v
      +------------------+               +------------------------+
      | Check /tmp/sqlite|               |  Recreate SQLite DB    |
      |   exists?        |               |  (if missing or stale) |
      +--------+---------+               +-----------+------------+
               |                                     |
               v                                     v
      +------------------------+         +------------------------+
      |  Write/Read to SQLite  |<--------|  Init schema + tables  |
      |  in /tmp/sqlite.db     |         |  (CREATE TABLE etc.)   |
      +------------------------+         +------------------------+
               |
               v
      +------------------------+
      |  Return response to    |
      |  API Gateway           |
      +------------------------+
               |
               v
      +------------------------+
      |  Response to Client    |
      +------------------------+
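The flow above can be sketched as a minimal handler using the stdlib `sqlite3` module. The `visits` table and response shape are illustrative; in Lambda, `/tmp` is the only writable path and its contents persist only while the execution environment stays warm, so the schema must be recreated on cold starts:

```python
import os
import sqlite3

DB_PATH = "/tmp/sqlite.db"  # /tmp is the only writable path in the Lambda filesystem

def ensure_db(path=DB_PATH):
    """Open the SQLite DB, recreating the schema if the container is cold."""
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE IF NOT EXISTS visits (id INTEGER PRIMARY KEY, note TEXT)")
    conn.commit()
    return conn

def handler(event, context=None):
    """Minimal Lambda-style handler: write, read, respond."""
    conn = ensure_db()
    conn.execute("INSERT INTO visits (note) VALUES (?)", (event.get("note", ""),))
    conn.commit()
    (count,) = conn.execute("SELECT COUNT(*) FROM visits").fetchone()
    conn.close()
    return {"statusCode": 200, "body": {"visits": count}}
```

A warm invocation sees the rows written by earlier invocations of the same container; a cold start begins from an empty database, which is why the diagram treats `/tmp` state as disposable.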

AWS for SpringMVC Developers

| AWS Service            | Spring MVC Equivalent              |
|------------------------|------------------------------------|
| EC2 Instance           | Application Host / Tomcat Server   |
| RDS (e.g., MySQL)      | Spring Data JPA + MySQL            |
| EFS                    | Shared File Storage (e.g., NFS)    |
| VPC                    | Network Boundary / DMZ             |
| Subnets                | Deployment Zones / Availability    |
| ELB (Load Balancer)    | Reverse Proxy / DispatcherServlet  |
| Security Groups        | Firewall Rules / Access Filters    |
| IAM Roles              | Spring Security Roles              |
| CloudWatch             | Actuator + Logging Frameworks      |
| Elastic Beanstalk      | Spring Boot Deployment Automation  |

Status Pages

Feature Parity

  • Both OLAP and OLTP systems typically use structured data, organized in tables with rows and columns. Data warehouses also rely heavily on structured data to facilitate complex queries and reporting.

  • Data warehouses and OLAP systems often use a schema-on-write approach, where the schema is defined before data is written, ensuring data consistency and integrity.

  • Data warehouses, OLAP systems, and data lakes often use ETL (Extract, Transform, Load) processes to integrate data from various sources. This ensures that data is cleaned, transformed, and loaded into the target system in a consistent format.

  • Data warehouses, OLAP systems, and data lakes serve as centralized repositories for large volumes of data, enabling efficient data management and retrieval.

  • Object stores and data lakes are designed to handle massive amounts of data, providing scalable storage solutions that can grow with the organization's needs.

  • All these systems implement access controls to ensure that only authorized users can access or modify the data. This is crucial for maintaining data security and compliance.

  • While each system has its unique features and use cases, these similarities in schema design help ensure that data is organized, integrated, stored, accessed, and secured efficiently across different platforms.

Lifecycle

Migration

Sitemap Scraping

Query Syntax

Design Patterns

Failover / Disaster Recovery

Practice Tests

Best Practices

Glossary

Training

Step Functions

ARNs Dereferencing

Scheduling Cron Jobs

+-------------------+        +-------------------+        +-------------------+
|                   |        |                   |        |                   |
|   EventBridge     |        |   Lambda Function |        |   API Gateway     |
|   (Cron Job)      |        |                   |        |                   |
|                   |        |                   |        |                   |
|   +------------+  |        |   +------------+  |        |   +------------+  |
|   |            |  |        |   |            |  |        |   |            |  |
|   |  Schedule  |  |        |   |  Execute   |  |        |   |  Receive   |  |
|   | Expression |  |        |   |  Business  |  |        |   |  HTTP      |  |
|   | (e.g.,     |  |        |   |  Logic     |  |        |   |  Request   |  |
|   |  "cron(0 * |  |        |   |  Logic     |  |        |   |            |  |
|   |  * * ? *)" |  |        |   |            |  |        |   |            |  |
|   |            |  |        |   |            |  |        |   |            |  |
|   +------------+  |        |   +------------+  |        |   +------------+  |
|                   |        |                   |        |                   |
+-------------------+        +-------------------+        +-------------------+
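Before registering a rule with EventBridge, the schedule expression can be sanity-checked locally. AWS cron expressions take six fields (minute, hour, day-of-month, month, day-of-week, year), and exactly one of day-of-month/day-of-week must be `?`. A minimal local pre-check (a sketch, not a full parser):

```python
import re

def validate_aws_cron(expression: str) -> bool:
    """Lightweight sanity check for an EventBridge cron() schedule expression."""
    m = re.fullmatch(r"cron\((.+)\)", expression.strip())
    if not m:
        return False
    fields = m.group(1).split()
    if len(fields) != 6:
        return False  # AWS cron needs minute, hour, dom, month, dow, year
    day_of_month, day_of_week = fields[2], fields[4]
    # AWS requires '?' in exactly one of day-of-month / day-of-week
    return ("?" in (day_of_month, day_of_week)) and not (day_of_month == day_of_week == "?")
```

Note that the six-field form differs from classic five-field Unix cron; `put_rule` rejects five-field expressions.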

Data Quality Assurance

         [Job Postings] -----> [Data Extraction] -----> [Technology Mapping]
             |                                             |
             V                                             V
  [Company Websites] -----> [Entity Resolution] -----> [Normalization]
             |                                             |
             V                                             V
       [API Sources] -----> [Daily Quality Assurance] -----> [Dataset Updates]
             |                                             |
             V                                             V
         [User Access] <----- [Historical Records] <----- [Verification]
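The pipeline above can be sketched as composable stage functions. All names and record shapes here are illustrative; each stage takes and returns a list of dicts, so stages can be rearranged or tested in isolation:

```python
def extract(postings):
    """Data Extraction: pull raw technology mentions out of job-posting text."""
    return [{"company": p["company"], "tech": t}
            for p in postings for t in p["text"].split()]

def map_technology(records, aliases):
    """Technology Mapping + Normalization via an alias table."""
    return [{**r, "tech": aliases.get(r["tech"].lower(), r["tech"].lower())}
            for r in records]

def quality_check(records):
    """Daily Quality Assurance: drop records missing required fields."""
    return [r for r in records if r.get("company") and r.get("tech")]

def run_pipeline(postings, aliases):
    """Chain the stages in the order shown in the diagram."""
    return quality_check(map_technology(extract(postings), aliases))
```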

Cost

Template Validation

| Goal                              | Tool/Method                     | Notes                                          |
|-----------------------------------|---------------------------------|------------------------------------------------|
| YAML syntax validation            | yamllint                        | Fast, local YAML linting                       |
| Validate Route 53 DNS records     | aws route53, Route 53 Console   | List, test, and monitor hosted zones and records |
| CloudFormation template validation | cfn-lint                       | Simulates EB backend template parsing          |
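Beyond cfn-lint, a cheap stdlib-only pre-check can catch obvious structural problems in a JSON template before any tooling runs. A sketch (the checks shown are illustrative, not exhaustive, and are no substitute for cfn-lint):

```python
import json

REQUIRED_TOP_LEVEL = {"Resources"}  # the only strictly required CloudFormation section

def precheck_template(template_json: str):
    """Cheap local sanity check before handing a template to cfn-lint or the API.

    Returns a list of problem strings; an empty list means the basic shape is fine.
    """
    try:
        doc = json.loads(template_json)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    problems = [f"missing top-level section: {key}"
                for key in REQUIRED_TOP_LEVEL - doc.keys()]
    for name, res in doc.get("Resources", {}).items():
        if "Type" not in res:
            problems.append(f"resource {name} has no Type")
    return problems
```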

Permissions

Performance

Emulation

testing_serverless_code:
  options:
    - name: OpenFaaS
      url: https://www.openfaas.com
      description: This open-source platform allows you to build, deploy, and test serverless functions using Docker containers. It works seamlessly on Windows.
    - name: VirtualBox
      url: https://www.virtualbox.org
      description: Create a Linux virtual machine to emulate your production environment and test your serverless code.
    - name: VMware
      url: https://www.vmware.com
      description: Create a Linux virtual machine to emulate your production environment and test your serverless code.
    - name: LocalStack
      url: https://localstack.cloud
      description: Emulates a complete cloud stack locally, allowing you to run and test various cloud services on your Windows machine.
    - name: Fn Project
      url: https://fnproject.io
      description: An open-source container-native serverless platform that can be run locally using Docker. It supports multiple languages and offers a flexible runtime.

Testing

CLI

Marketplace

Product Advertising

SMTP

SQS

SQLite / S3Select

DynamoDB

Use Cases

| Use Case | Description |
|----------|-------------|
| Session Management | Storing temporary user sessions with TTL for automatic expiration. |
| Caching Layer | Using DynamoDB Accelerator (DAX) for low-latency caching. |
| Event-Driven Data | Holding transient event logs or messages before processing in AWS Lambda or Kinesis. |
| Temporary Analytics | Storing short-lived analytical data for real-time dashboards or streaming insights. |
| Job Queues | Managing ephemeral task queues for distributed processing or serverless workflows. |
| User Profiles | Storing persistent user data for authentication and personalization. |
| Product Catalogs | Managing structured product information for e-commerce platforms. |
| Financial Transactions | Keeping records of purchases, payments, and account balances. |
| IoT Data Storage | Retaining sensor data for analytics and historical tracking. |
| Gaming State Management | Storing player progress, inventory, and leaderboard data. |
| Audit Logs | Maintaining compliance-related logs for security and tracking. |
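For the session-management case, DynamoDB's TTL feature expects an epoch-seconds Number attribute on each item. A sketch of building such an item (attribute names like `expires_at` and the key layout are illustrative; the dict uses the low-level DynamoDB attribute-value format a `PutItem` call would accept):

```python
import time

def build_session_item(session_id: str, user_id: str, ttl_seconds: int = 3600):
    """Build a DynamoDB item dict for a session with a TTL attribute.

    DynamoDB TTL deletes items once the configured Number attribute
    (here 'expires_at', in epoch seconds) is in the past.
    """
    now = int(time.time())
    return {
        "session_id": {"S": session_id},              # partition key
        "user_id": {"S": user_id},
        "created_at": {"N": str(now)},
        "expires_at": {"N": str(now + ttl_seconds)},  # TTL attribute
    }
```

Note that TTL deletion is background and best-effort (typically within a day or two of expiry), so reads should still filter on `expires_at` if strict expiry matters.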

SES

SNS

IAM

OWASP Best Practices

Atlassian Admin — https://admin.atlassian.com

  Decoupled 2FA – Risks (Keyman Anti‑Patterns):
    - Only one admin registers hardware key
    - Backup codes stored by a single person
    - SMS tied to one phone number of a key employee
  Decoupled 2FA – Mitigations:
    - Require multiple admins to register hardware keys
    - Store backup codes in shared secure vault
    - Prefer TOTP/WebAuthn over SMS
  Decoupled Username – Risks (Keyman Anti‑Patterns):
    - Using personal email addresses as usernames
    - Single shared admin account tied to one email
    - No external IdP → lock‑in to Atlassian email identity
  Decoupled Username – Mitigations:
    - Enforce Atlassian Guard with IdP federation
    - Require unique Atlassian Account IDs
    - Use system‑assigned usernames, email only for contact
  OWASP Best Practices:
    - OWASP ASVS 2.1: Multi‑factor authentication must support multiple methods
    - OWASP ASVS 4.1: Do not use email addresses as primary identifiers
    - OWASP Top 10 A07: Ensure proper identity and access management

Google Cloud Admin — https://console.cloud.google.com / https://admin.google.com

  Decoupled 2FA – Risks (Keyman Anti‑Patterns):
    - Sole reliance on one super‑admin’s phone for 2FA
    - Recovery codes stored in personal Google Drive
    - No secondary factors registered for other admins
  Decoupled 2FA – Mitigations:
    - Enforce multiple super‑admins with distinct factors
    - Store recovery codes in enterprise password manager
    - Require secondary factors for all admins
  Decoupled Username – Risks (Keyman Anti‑Patterns):
    - Using Gmail addresses of individuals as canonical usernames
    - No federation with IdP → identity bound to personal email
    - Shared “admin@domain” account without rotation
  Decoupled Username – Mitigations:
    - Federate identity via IdP (Azure AD, Okta)
    - Use employee IDs or opaque handles as usernames
    - Rotate shared admin accounts into individual accounts
  OWASP Best Practices:
    - OWASP ASVS 2.2: Ensure recovery mechanisms are secure and not tied to one person
    - OWASP ASVS 4.2: Support federated identity providers
    - OWASP Top 10 A05: Implement proper security configuration

AWS Admin — https://console.aws.amazon.com / https://console.aws.amazon.com/iam

  Decoupled 2FA – Risks (Keyman Anti‑Patterns):
    - Root account MFA tied to one device
    - Hardware token held by a single operator
    - No delegation of IAM roles → one person controls all
  Decoupled 2FA – Mitigations:
    - Register multiple MFA devices for root
    - Issue hardware tokens to multiple admins
    - Delegate IAM roles instead of relying on root
  Decoupled Username – Risks (Keyman Anti‑Patterns):
    - IAM usernames tied to personal emails
    - Root account email controlled by one person
    - Lack of aliasing → operational identity bound to individual email
  Decoupled Username – Mitigations:
    - Use IAM usernames independent of email
    - Assign root account recovery to shared admin group
    - Implement aliases or employee IDs for operational identity
  OWASP Best Practices:
    - OWASP ASVS 2.3: Root accounts must have multiple MFA options
    - OWASP ASVS 4.3: Separate operational identity from contact info
    - OWASP Top 10 A01: Broken access control prevention via role delegation

OIDC

(lambda-invoke-workflow
  ;; name: Secure Lambda Invocation
  ;; on:
  ;;   workflow_dispatch:

  (workflow
    (name "Secure Lambda Invocation")
    (trigger workflow_dispatch))

  ;; jobs:
  ;;   invoke-lambda:
  ;;     runs-on: ubuntu-latest
  ;;     permissions:
  ;;       id-token: write
  ;;       contents: read

  (job invoke-lambda
    (runner ubuntu-latest)
    (permissions
      (id-token write)
      (contents read))

    ;; steps:
    ;;   - name: Configure AWS credentials via OIDC
    ;;     uses: aws-actions/configure-aws-credentials@v2
    ;;     with:
    ;;       role-to-assume: arn:aws:iam::123456789012:role/GitHubOIDCRole
    ;;       aws-region: us-west-2

    (step configure-aws-credentials
      (uses aws-actions/configure-aws-credentials@v2)
      (with
        (role-to-assume "arn:aws:iam::123456789012:role/GitHubOIDCRole")
        (aws-region "us-west-2")))

    ;;   - name: Invoke Lambda
    ;;     run: |
    ;;       aws lambda invoke \
    ;;         --function-name my-lambda-function \
    ;;         --payload '{"key": "value"}' \
    ;;         response.json
    ;;       cat response.json

    (step invoke-lambda
      (run
        (aws lambda invoke
          (function-name "my-lambda-function")
          (payload "{\"key\": \"value\"}")
          (output "response.json"))
        (cat "response.json")))))

;; IAM Role Setup (for reference)

(iam-role GitHubOIDCRole
  ;; Trust Policy
  ;; {
  ;;   "Version": "2012-10-17",
  ;;   "Statement": [
  ;;     {
  ;;       "Effect": "Allow",
  ;;       "Principal": {
  ;;         "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
  ;;       },
  ;;       "Action": "sts:AssumeRoleWithWebIdentity",
  ;;       "Condition": {
  ;;         "StringEquals": {
  ;;           "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
  ;;           "token.actions.githubusercontent.com:sub": "repo:your-org/your-repo:ref:refs/heads/main"
  ;;         }
  ;;       }
  ;;     }
  ;;   ]
  ;; }

  (trust-policy
    (principal
      (federated "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"))
    (action sts:AssumeRoleWithWebIdentity)
    (condition
      (StringEquals
        ("aud" "sts.amazonaws.com")
        ("sub" "repo:your-org/your-repo:ref:refs/heads/main"))))

  ;; Permissions Policy
  ;; {
  ;;   "Version": "2012-10-17",
  ;;   "Statement": [
  ;;     {
  ;;       "Effect": "Allow",
  ;;       "Action": "lambda:InvokeFunction",
  ;;       "Resource": "arn:aws:lambda:us-west-2:123456789012:function:my-lambda-function"
  ;;     }
  ;;   ]
  ;; }

  (permissions-policy
    (action lambda:InvokeFunction)
    (resource "arn:aws:lambda:us-west-2:123456789012:function:my-lambda-function")))

WAF

VPC

+-------------------------------------------------------------+
|                           VPC                               |
|                                                             |
|  +-------------------+     +-------------------+            |
|  |   Public Subnet   |     |   Private Subnet  |            |
|  |  (ELB, NAT GW)    |     | (EC2, RDS, EFS)   |            |
|  +-------------------+     +-------------------+            |
|         |                         |                         |
|         |                         |                         |
|   +---------------+       +-------------------+             |
|   | Elastic       |       | Elastic Beanstalk |             |
|   | Load Balancer |       | EC2 Instances     |             |
|   +---------------+       +-------------------+             |
|         |                         |                         |
|         |                         |                         |
|   +-------------------------------+                         |
|   |       Security Group(s)       |                         |
|   |  - ELB SG (ingress: 80/443)   |                         |
|   |  - EC2 SG (ingress: from ELB) |                         |
|   |  - EFS SG (ingress: 2049)     |                         |
|   +-------------------------------+                         |
|                                                             |
+-------------------------------------------------------------+

Relationship Breakdown:
-----------------------
VPC:
  - Encloses all networking components and resources.

Subnets:
  - Public Subnet: Hosts ELB and NAT Gateway.
  - Private Subnet: Hosts EC2 (Beanstalk), RDS, and EFS.

Security Groups:
  - ELB SG: Allows inbound HTTP/HTTPS traffic from the internet.
  - EC2 SG: Allows inbound traffic from ELB SG.
  - EFS SG: Allows NFS traffic (port 2049) from EC2 SG.

Elastic Beanstalk:
  - Deploys EC2 instances into private subnets.
  - Attaches EC2 SG and optionally mounts EFS volumes.
  - Can configure ELB in public subnet to route traffic to EC2.

Elastic Load Balancer:
  - Public-facing; forwards traffic to EC2 instances in private subnet.
  - Associated with ELB SG.

EFS (optional):
  - Mount targets created in private subnets.
  - Secured via EFS SG allowing access from EC2 SG.

Lambda

Limitations

Languages with Elixir‑like AWS Lambda Limitations

| Language | Runtime Type | Why Same Limitations Apply |
|----------|--------------|----------------------------|
| Elixir | BEAM (Erlang VM) | No official runtime; requires custom runtime; heavy cold starts; complex packaging; limited tooling. |
| Erlang | BEAM (Erlang VM) | Same as Elixir — BEAM VM startup overhead; custom runtime required; sparse Lambda tooling. |
| OCaml | Compiled | No native runtime; requires custom build and packaging; sparse community support. |
| Crystal | Compiled | Compiles to native binaries; no official runtime; manual packaging and deployment scripts needed. |
| Swift | Compiled | Requires custom runtime; packaging into Lambda layers is complex; limited official support. |
| Java | JVM | Official runtime exists, but JVM cold starts are significant; packaging JARs is complex. |
| Scala | JVM | Inherits JVM cold start and packaging issues; limited Lambda‑specific tooling. |
| Groovy | JVM | Runs on JVM; same cold start overhead; requires JAR packaging; not officially supported. |
| Clojure | JVM | Requires custom runtime; JVM cold starts; packaging complexity; limited Lambda tooling. |
| Objective‑C | Compiled | No official runtime; requires custom runtime and manual packaging; sparse documentation. |
| C# | .NET CLR | Official runtime exists, but .NET cold starts are significant; packaging DLLs is complex. |
| F# | .NET CLR | Same as C# — runs on .NET; cold start overhead; packaging complexity; limited Lambda‑specific tooling. |

RDS

Deprecation

Continuous Deployments

Blue/Green Deployments

Schema Migration

<!-- Step 1: Clone Blue Database to Create Green Database -->
<xsl:template name="clone-database">
    <xsl:param name="blueDatabase" />
    <xsl:variable name="greenDatabase">
        <xsl:copy-of select="$blueDatabase" />
    </xsl:variable>
</xsl:template>

<!-- Step 2: Apply Schema Changes to Green Database -->
<xsl:template name="apply-schema-changes">
    <xsl:param name="greenDatabase" />
    <xsl:variable name="updatedGreenDatabase">
        <!-- Define schema changes -->
        <xsl:copy>
            <!-- Add or modify tables, fields, etc. -->
            <table name="NewTable">
                <field name="NewField" type="varchar" />
            </table>
        </xsl:copy>
    </xsl:variable>
</xsl:template>

<!-- Step 3: Validate and Test Green Database (validation() here is a placeholder, not a standard XSLT function) -->
<xsl:template name="validate-database">
    <xsl:param name="greenDatabase" />
    <xsl:choose>
        <xsl:when test="validation($greenDatabase)">
            <xsl:text>Validation Successful</xsl:text>
        </xsl:when>
        <xsl:otherwise>
            <xsl:text>Validation Failed</xsl:text>
        </xsl:otherwise>
    </xsl:choose>
</xsl:template>

<!-- Step 4: Promote Green Database to Production -->
<xsl:template name="promote-to-production">
    <xsl:param name="greenDatabase" />
    <xsl:variable name="productionDatabase" select="$greenDatabase" />
</xsl:template>

<!-- Step 5: Monitor Green Database and Roll Back if Needed -->
<xsl:template name="monitor-and-rollback">
    <xsl:param name="greenDatabase" />
    <xsl:param name="blueDatabase" />
    <xsl:choose>
        <xsl:when test="monitor($greenDatabase) = 'issues_detected'">
            <xsl:variable name="activeDatabase" select="$blueDatabase" />
            <xsl:text>Rollback Completed</xsl:text>
        </xsl:when>
        <xsl:otherwise>
            <xsl:text>Migration Successful</xsl:text>
        </xsl:otherwise>
    </xsl:choose>
</xsl:template>

Upgrade

stages:
  - stage: "Upgrade from PostgreSQL 11 to PostgreSQL 12"
    steps:
      - step: "Preparation and Backup"
        description: >
          Identify the current production RDS instance (blue environment) running PostgreSQL 11 and take a snapshot for a reliable rollback option.
        url: "https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_CommonTasks.BackupRestore.html"
      - step: "Compatibility Review"
        description: >
          Review AWS documentation and PostgreSQL 12 release notes to confirm that your application, extensions, and configurations are supported.
        url: "https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Upgrading.html"
      - step: "Initiate Blue/Green Deployment"
        description: >
          Use the RDS Blue/Green Deployments feature to create a green environment that mirrors your current production (blue) instance.
        url: "https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/blue-green-deployments-overview.html"
      - step: "Apply Upgrade in Green Environment"
        description: >
          In the green environment, trigger the major engine upgrade from PostgreSQL 11 to PostgreSQL 12 and adjust parameter groups if necessary.
        url: "https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Upgrading.html"
      - step: "Testing and Validation"
        description: >
          Run functional, integration, and performance tests using AWS monitoring tools to validate that PostgreSQL 12 is operating as expected.
        url: "https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Monitoring.html"
      - step: "Switchover to Production"
        description: >
          Execute the blue/green switchover to route production traffic to the upgraded PostgreSQL 12 (green) environment with minimal downtime.
        url: "https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/blue-green-deployments-overview.html"

  - stage: "Upgrade from PostgreSQL 12 to Latest Version"
    steps:
      - step: "Preparation and Snapshot"
        description: >
          With PostgreSQL 12 now in production, create a fresh snapshot to serve as a rollback baseline before initiating the next upgrade.
        url: "https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_CommonTasks.BackupRestore.html"
      - step: "Review New Version Compatibility"
        description: >
          Verify that your application and extensions are compatible with the latest PostgreSQL version by reviewing release notes and AWS documents.
        url: "https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Upgrading.html"
      - step: "Initiate Second Blue/Green Deployment"
        description: >
          Trigger a new blue/green deployment from the PostgreSQL 12 environment to create a fresh green environment for the upgrade.
        url: "https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/blue-green-deployments-overview.html"
      - step: "Apply Upgrade in the Second Green Environment"
        description: >
          Upgrade the green environment from PostgreSQL 12 to the latest PostgreSQL version available, updating parameter groups as required.
        url: "https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Upgrading.html"
      - step: "Comprehensive Testing"
        description: >
          Conduct comprehensive integration, functional, and performance tests to ensure the upgraded environment meets all operational requirements.
        url: "https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PerfInsights.html"
      - step: "Final Switchover"
        description: >
          Complete the final switchover by routing production traffic to the green environment running the latest PostgreSQL version.
        url: "https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/blue-green-deployments-overview.html"

References

EC2

WordPress Setup

# Method from an EC2 wrapper class; assumes self.ec2_client is a boto3 EC2 client
def create_instance_from_ami_vpc(self, ami_id, key_name, instance_type='t2.micro'):
    """
    Finds an existing instance launched from the given AMI ID,
    retrieves its VPC ID, and launches a new instance in the same VPC.
    
    :param ami_id: AMI ID to base the new instance on
    :param key_name: Key pair name
    :param instance_type: EC2 instance type
    :return: New instance ID or None
    """
    try:
        # Step 1: Find an instance using the AMI
        response = self.ec2_client.describe_instances(
            Filters=[{'Name': 'image-id', 'Values': [ami_id]}]
        )
        reservations = response.get('Reservations', [])
        if not reservations:
            print(f"No instances found using AMI {ami_id}")
            return None

        instance = reservations[0]['Instances'][0]
        vpc_id = instance['VpcId']
        print(f"Found VPC ID {vpc_id} from instance {instance['InstanceId']}")

        # Step 2: Find a subnet in that VPC
        subnets = self.ec2_client.describe_subnets(
            Filters=[{'Name': 'vpc-id', 'Values': [vpc_id]}]
        )
        subnet_id = subnets['Subnets'][0]['SubnetId']

        # Step 3: Launch new instance
        new_instance = self.ec2_client.run_instances(
            ImageId=ami_id,
            InstanceType=instance_type,
            KeyName=key_name,
            SubnetId=subnet_id,
            MinCount=1,
            MaxCount=1
        )
        new_instance_id = new_instance['Instances'][0]['InstanceId']
        print(f"Launched new instance {new_instance_id} in VPC {vpc_id}")
        return new_instance_id

    except Exception as e:
        print(f"Error creating instance from AMI VPC: {e}")
        return None

# Separate script: end-to-end WordPress provisioning helper
import boto3
import subprocess
import os
from crontab import CronTab

class WordPressProvisioner:
    def __init__(self, region="us-west-2", key_name="wordpress-key", secret_name="wordpress-ec2-key", notification_email="[email protected]"):
        self.region = region
        self.key_name = key_name
        self.secret_name = secret_name
        self.notification_email = notification_email
        self.pem_content = None
        self.instance_id = None
        self.ec2 = boto3.client("ec2", region_name=self.region)
        self.secrets = boto3.client("secretsmanager", region_name=self.region)
        self.cloudwatch = boto3.client("cloudwatch", region_name=self.region)
        self.sns = boto3.client("sns", region_name=self.region)
        self.topic_arn = None

    # Step 0: Create EC2 instance and store PEM + instance ID
    def create_ec2_instance(self):
        key_pair = self.ec2.create_key_pair(KeyName=self.key_name)
        self.pem_content = key_pair["KeyMaterial"]

        instance = self.ec2.run_instances(
            ImageId="ami-0c55b159cbfafe1f0",
            InstanceType="t2.micro",
            KeyName=self.key_name,
            MinCount=1,
            MaxCount=1,
            SecurityGroups=["default"]
        )
        self.instance_id = instance["Instances"][0]["InstanceId"]
        print("EC2 instance created:", self.instance_id)

    # Step 1: Store PEM in Secrets Manager
    def store_pem_in_secrets_manager(self):
        if not self.pem_content:
            raise ValueError("PEM key not found in memory. Run create_ec2_instance() first.")
        try:
            self.secrets.create_secret(Name=self.secret_name, SecretString=self.pem_content)
        except self.secrets.exceptions.ResourceExistsException:
            self.secrets.put_secret_value(SecretId=self.secret_name, SecretString=self.pem_content)
        print("PEM key stored in Secrets Manager as:", self.secret_name)

    # Step 2: Instructions to SSH and install WordPress
    def install_wordpress(self):
        """
        Manual Instructions:
        1. Retrieve PEM from Secrets Manager:
           aws secretsmanager get-secret-value --secret-id wordpress-ec2-key --query 'SecretString' --output text > temp.pem
           chmod 400 temp.pem

        2. Get EC2 public IP:
           aws ec2 describe-instances --instance-ids <instance_id> --query 'Reservations[0].Instances[0].PublicIpAddress' --output text

        3. SSH into EC2:
           ssh -i temp.pem ec2-user@<public_ip>

        4. Install WordPress manually on EC2.
        """

    # Step 3: Instructions to update WordPress using plugin
    def generate_update_script(self):
        """
        Manual Instructions:
        1. Log into WordPress admin.
        2. Install "Easy Updates Manager" plugin.
        3. Enable automatic updates for core, plugins, and themes.
        """

    # Step 4: Create weekly cron job (optional)
    def create_cron_job(self):
        cron = CronTab(user=True)
        job = cron.new(command=f"bash {os.path.abspath('update_wordpress.sh')}", comment="Weekly WordPress Update")
        job.setall("0 3 * * 1")
        cron.write()
        print("Cron job scheduled for weekly WordPress update")

    # Step 5: Terminate EC2 instance
    def terminate_ec2_instance(self):
        if not self.instance_id:
            raise ValueError("Instance ID not set. Run create_ec2_instance() first.")
        self.ec2.terminate_instances(InstanceIds=[self.instance_id])
        print("EC2 instance termination initiated:", self.instance_id)

    # Step 6: Create CloudWatch alarm for EC2 CPU usage
    def create_cloudwatch_alarm(self):
        alarm_name = f"HighCPU-{self.instance_id}"
        self.cloudwatch.put_metric_alarm(
            AlarmName=alarm_name,
            ComparisonOperator="GreaterThanThreshold",
            EvaluationPeriods=1,
            MetricName="CPUUtilization",
            Namespace="AWS/EC2",
            Period=300,
            Statistic="Average",
            Threshold=70.0,
            ActionsEnabled=True,
            AlarmActions=[self.topic_arn] if self.topic_arn else [],
            AlarmDescription="Alarm when CPU exceeds 70 percent",
            Dimensions=[{"Name": "InstanceId", "Value": self.instance_id}]
        )
        print("CloudWatch alarm created:", alarm_name)

    # Step 7: Create SNS topic and subscribe email
    def setup_sns_notifications(self):
        topic = self.sns.create_topic(Name="WordPressAlerts")
        self.topic_arn = topic["TopicArn"]

        self.sns.subscribe(
            TopicArn=self.topic_arn,
            Protocol="email",
            Endpoint=self.notification_email
        )
        print("SNS topic created and email subscription sent to:", self.notification_email)
        print("Note: Confirm the subscription via email before alarms will trigger notifications.")

# Usage
if __name__ == "__main__":
    wp = WordPressProvisioner()
    wp.create_ec2_instance()
    wp.store_pem_in_secrets_manager()
    wp.install_wordpress()
    wp.generate_update_script()
    wp.create_cron_job()
    wp.setup_sns_notifications()
    wp.create_cloudwatch_alarm()
    # wp.terminate_ec2_instance()

EFS

Elastic Beanstalk

Troubleshooting

+-------------------------------------------------------------+
| Elastic Beanstalk Recovery & Debug Checklist                |
+-------------------------------------------------------------+

[ ] Review logs from the failing instance
    - Elastic Beanstalk Console > Logs > Request Logs > Instance Logs
    - Look for:
        • App crash traces
        • Port binding errors
        • Missing environment variables
        • Composer or Docker errors

[ ] Check .ebextensions and container commands
    - Ensure all commands are idempotent
    - Avoid assumptions:
        • Pre-existing directories
        • Mounted volumes
    - Use `|| true` to prevent non-critical failures:
        container_commands:
          01_setup:
            command: "mkdir -p /var/app/data || true"

[ ] Rebuild environment (no database binding)
    - Pros:
        • Clears corrupted deployments
        • Resets configuration drift
        • Reprovisions EC2 instances
        • Tests idempotency
    - Cons:
        • Causes temporary downtime
        • Ephemeral data is lost
        • Requires redeployment if version is not pinned
        • May mask deeper configuration issues

[ ] Terminate EC2 instance manually
    - Pros:
        • Triggers Elastic Beanstalk to replace the instance
        • Faster than full rebuild
        • Preserves environment config
    - Cons:
        • Doesn’t fix root cause
        • May cause load imbalance
        • Risk of repeated failures

[ ] Identify EC2 instance with incorrect application version
    - EC2 Console > Filter by Elastic Beanstalk tags
    - Compare deployed version in Elastic Beanstalk console
    - Terminate mismatched instance via EC2 Console
    - Elastic Beanstalk will auto-replace it

[ ] Fix 502 Bad Gateway after severe state
    - Elastic Beanstalk Console > Check environment health
    - Look for “Severe” or “Degraded” status
    - View logs to confirm proxy or app failure
    - If isolated: terminate instance via EC2 Console
    - If systemic: use “Rebuild Environment” from Actions menu
    - Redeploy correct application version
    - Confirm health returns to “Green”
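
The manual recovery actions in the checklist above can also be scripted. A minimal sketch, assuming boto3 is available and using a simple majority rule to decide between the "isolated" and "systemic" cases (the threshold and names are illustrative):

```python
def choose_action(total_instances, unhealthy_instances):
    """Mirror the checklist: isolated failure -> replace one instance,
    systemic failure (majority unhealthy) -> rebuild the environment."""
    return "rebuild" if unhealthy_instances > total_instances / 2 else "terminate"

def replace_instance(instance_id):
    """Terminate one instance; the environment's Auto Scaling group launches a replacement."""
    import boto3  # assumed available
    ec2 = boto3.client("ec2")
    return ec2.terminate_instances(InstanceIds=[instance_id])

def rebuild(environment_name):
    """Full environment rebuild; reprovisions all instances and causes downtime."""
    import boto3  # assumed available
    eb = boto3.client("elasticbeanstalk")
    return eb.rebuild_environment(EnvironmentName=environment_name)
```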

Log Files

Elastic Beanstalk Log Bundle with CloudFormation Lifecycle Mapping

| Log File             | CloudFormation Lifecycle (Boto3)       | Description                                                                                  |
|----------------------|----------------------------------------|----------------------------------------------------------------------------------------------|
| eb-tools.log         | CreateStack / UpdateStack              | Logs from the EB CLI and platform tools during environment provisioning and updates.         |
| eb-publish.log       | CreateChangeSet / ExecuteChangeSet     | Tracks application version publishing and deployment orchestration.                          |
| eb-engine.log        | ExecuteChangeSet                       | Main EB engine log. Tracks the deployment lifecycle, configuration parsing, and container orchestration. |
| eb-hooks.log         | ExecuteChangeSet                       | Logs execution of platform lifecycle hooks.                                                  |
| eb-cfn-init.log      | CreateStack / UpdateStack              | Wrapper log for cfn-init, capturing provisioning status and errors.                          |
| eb-cfn-init-call.log | CreateStack / UpdateStack              | Logs the command used to invoke cfn-init, including metadata and parameters.                 |
| cfn-init.log         | CreateStack / UpdateStack              | Core log for cfn-init. Tracks resource creation and configuration sets.                      |
| cfn-init-cmd.log     | CreateStack / UpdateStack              | Detailed command output from cfn-init.                                                       |
| cfn-hup.log          | UpdateStack                            | Logs from cfn-hup, which monitors metadata changes and triggers updates.                     |
| cfn-wire.log         | SignalResource / DescribeStackResource | Tracks metadata fetches and signal responses between the instance and CloudFormation.        |
| yum.log              | CreateStack / UpdateStack              | Package manager log for Amazon Linux. See comparison of APT, DNF, Pacman, and others.        |
| cloud-init-output.log| CreateStack                            | Output from cloud-init during EC2 instance bootstrapping.                                    |
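
The bundle above can be pulled programmatically via the Elastic Beanstalk API. A hedged sketch, assuming boto3 is available; the lifecycle lookup table is a small excerpt of the mapping above, with the common "CreateStack / UpdateStack" phase as the default:

```python
import time

# Excerpt of the lifecycle mapping table above (illustrative, not exhaustive)
LIFECYCLE = {
    "eb-engine.log": "ExecuteChangeSet",
    "eb-hooks.log": "ExecuteChangeSet",
    "cfn-hup.log": "UpdateStack",
    "cloud-init-output.log": "CreateStack",
}

def lifecycle_phase(log_file):
    """Map a log filename to the CloudFormation phase it is written in."""
    return LIFECYCLE.get(log_file, "CreateStack / UpdateStack")

def fetch_eb_log_tail(environment_name):
    """Request, then retrieve, the tail-log bundle for an environment."""
    import boto3  # assumed available
    eb = boto3.client("elasticbeanstalk")
    eb.request_environment_info(EnvironmentName=environment_name, InfoType="tail")
    time.sleep(15)  # EB needs a moment to gather logs from the instances
    info = eb.retrieve_environment_info(EnvironmentName=environment_name, InfoType="tail")
    # Each entry's "Message" is a presigned S3 URL pointing at the log text
    return [e["Message"] for e in info["EnvironmentInfo"]]
```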

Lifecycle Mapping Notes

Kubernetes / Fargate / ECS

Cloudwatch

Cloudfront

Cloudformation

Serverless Framework

Boto

S3 / ETL

S3 <=> Lambda

Downloading from S3 to /tmp

import requests

# Verify a presigned S3 URL is still valid with a HEAD request before downloading
head = requests.head(url)
if head.status_code == 200:
    pass  # Safe to use; proceed with the download
else:
    pass  # Token expired or invalid; regenerate the presigned URL
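
For the actual download into Lambda's writable scratch space, a minimal sketch using the boto3 S3 client (bucket and key are placeholders; boto3 is bundled in the Lambda Python runtime):

```python
import os

def tmp_path(key):
    """Derive the /tmp destination from the object key's basename."""
    return os.path.join("/tmp", os.path.basename(key))

def download_to_tmp(bucket, key):
    """Download an S3 object into Lambda's /tmp and return the local path."""
    import boto3  # bundled in the Lambda Python runtime
    local_path = tmp_path(key)
    boto3.client("s3").download_file(bucket, key, local_path)
    return local_path
```

For example, `download_to_tmp("my-bucket", "data/db/sqlite.db")` would place the file at `/tmp/sqlite.db`, matching the SQLite-in-/tmp pattern in the diagram above.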

Glue

Secrets Manager

API Gateway

Logs

Query Snippets

fields @timestamp, @message
| filter status = 403

fields @timestamp, @message
| filter errorMessage = "Forbidden"

fields @timestamp, requestId, extendedRequestId
| filter status = 403

fields @timestamp, integrationErrorMessage
| filter status = 403 and ispresent(integrationErrorMessage)

fields @timestamp, identity.sourceIp
| filter status = 403

fields @timestamp, identity.userAgent
| filter status = 403

fields @timestamp, identity.apiKey
| filter status = 403 and ispresent(identity.apiKey)

fields @timestamp, authorizer.error
| filter status = 403 and ispresent(authorizer.error)

fields @timestamp, resourcePath, stage
| filter status = 403

How to Use

  • Paste any of the snippets above into the CloudWatch Logs Insights query editor.
  • Start with the basic status = 403 filter, then expand with fields like errorMessage, identity.apiKey, or authorizer.error depending on what you’re debugging.
  • Use requestId or extendedRequestId to correlate CloudFront access logs (sc-status=403) with API Gateway execution logs.
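
The same snippets can be run programmatically with the CloudWatch Logs `start_query` / `get_query_results` APIs. A hedged sketch, assuming boto3 is available and using the basic 403 filter from above:

```python
import time

# The basic 403 snippet from the list above
FORBIDDEN_QUERY = """fields @timestamp, @message
| filter status = 403"""

def run_insights_query(log_group, query=FORBIDDEN_QUERY, minutes=60):
    """Run a Logs Insights query over the last `minutes` and wait for results."""
    import boto3  # assumed available
    logs = boto3.client("logs")
    now = int(time.time())
    query_id = logs.start_query(
        logGroupName=log_group,
        startTime=now - minutes * 60,
        endTime=now,
        queryString=query,
    )["queryId"]
    while True:
        resp = logs.get_query_results(queryId=query_id)
        if resp["status"] in ("Complete", "Failed", "Cancelled"):
            return resp["results"]
        time.sleep(1)  # poll until the query finishes
```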

Finding the API Gateway CloudWatch Log Group

  1. Open the AWS Console → CloudWatch → Log groups.

  2. Look for a log group whose name contains 'gateway' (execution logs follow the pattern API-Gateway-Execution-Logs_{rest-api-id}/{stage-name}).

  3. If you don’t see it, check the API Gateway console → Stages → Logs/Tracing tab.

  • Ensure Enable CloudWatch Logs is turned on.
  • Specify a log group name (AWS will create it automatically if it doesn’t exist).

Creating the Log Group if It Does Not Exist

  • In the API Gateway console, navigate to your API → Stages → select the stage (e.g., prod).
  • Under Logs/Tracing, enable:
      • Enable CloudWatch Logs
      • Enable Access Logging (optional but recommended)
  • Provide a log group name (e.g., /aws/apigateway/my-api/prod).
  • Save changes — API Gateway will automatically create the log group in CloudWatch.
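
You can also pre-create the log group yourself. A minimal sketch, assuming boto3 is available; the naming convention follows the /aws/apigateway/my-api/prod example above, and the 14-day retention is an arbitrary illustrative choice:

```python
def access_log_group(api_name, stage):
    """Build a log group name following the example convention above."""
    return f"/aws/apigateway/{api_name}/{stage}"

def ensure_log_group(name, retention_days=14):
    """Create the log group if missing and cap how long logs are kept."""
    import boto3  # assumed available
    logs = boto3.client("logs")
    try:
        logs.create_log_group(logGroupName=name)
    except logs.exceptions.ResourceAlreadyExistsException:
        pass  # idempotent: the group already exists
    logs.put_retention_policy(logGroupName=name, retentionInDays=retention_days)
```

Setting a retention policy matters because execution logging on a busy API can accumulate storage costs quickly if logs are kept forever.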

Relationship Between CloudWatch and AWS WAF

  • AWS WAF (Web Application Firewall) can be attached to CloudFront or API Gateway to block or allow traffic.
  • When WAF blocks a request (e.g., due to a rule match), the client sees a 403 Forbidden.
  • These WAF actions are logged to CloudWatch Logs if logging is enabled for the WAF web ACL.

Correlation

  • CloudFront access logs show the 403.
  • API Gateway execution logs show the request context.
  • WAF logs in CloudWatch show the rule that triggered the block.
  • Together, CloudWatch + WAF logs let you distinguish between application-level 403s (API Gateway authorizer, missing API key) and security-level 403s (WAF rule match).

Key Insight

  • CloudFront logs show only the surface symptom (sc-status=403).
  • API Gateway CloudWatch logs reveal the detailed cause (errorMessage, authorizer.error, identity.*).
  • WAF logs explain if the 403 was due to a security rule.
  • Combining all three gives you end-to-end visibility into why a request was denied.

Cognito / Lambda API Proxy

Route 53
