Assessment ‐ Reflection - Campus-Castolo/m300 GitHub Wiki
🌱 Project Reflection – M300
The M300 project was not just another school assignment. It became a full-blown IT learning experience where I challenged myself to architect, automate, secure, and document a modern cloud-native setup using AWS, Docker, and Terraform. It tested both my technical know-how and my ability to plan, reflect, and improve continuously.
🧩 Problems and Challenges
One of the earliest and most persistent challenges was grasping the true scope of the competency matrix (A1–I1). Translating abstract goals like "Entwicklung eines Integrationskonzepts" (developing an integration concept) into real, practical implementations like CI/CD pipelines, security rules, or modular Terraform was tough at first. It required me to connect theory with practice.
Another major issue was documenting in a way that wasn’t too detached or overly technical. I kept asking myself: How do I explain my design decisions without overwhelming the reader? I had to strike a balance between detail and clarity. A great example of this was documenting how I modularized Terraform for better scalability and maintenance.
Technical hurdles I faced included:
- **Terraform state management:** Handling `terraform.tfstate` files without corrupting infrastructure taught me about remote state backends and locking.
- **Lambda + snapshot automation:** Debugging IAM policies, getting CloudWatch Events triggers to run the snapshot function, and ensuring correct RDS tagging logic all required precise troubleshooting.
- **Security refactoring:** Moving all rules to `infrastructure-sec.tf` and applying best practices such as least-privilege access and tagging.
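The remote state setup that solved the `tfstate` problem can be sketched roughly like this. The bucket and lock-table names here are placeholders for illustration, not the ones I actually used:

```hcl
terraform {
  backend "s3" {
    # Hypothetical bucket/table names — substitute your own
    bucket         = "m300-terraform-state"
    key            = "m300/terraform.tfstate"
    region         = "eu-central-1"
    dynamodb_table = "terraform-locks" # enables state locking
    encrypt        = true
  }
}
```

With the state in S3 and a DynamoDB lock table, two concurrent `terraform apply` runs can no longer clobber each other's state.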
🔍 Example: Lambda Snapshot Automation
Creating a Lambda that takes automated RDS snapshots and applies timestamped tags was a breakthrough moment for me. It required integrating:
- AWS Lambda (Python)
- IAM role with correct trust relationship
- CloudWatch Events as a trigger
Here’s a simplified version of what I implemented:
`snapshot_function.py`:

```python
import boto3
from datetime import datetime

def lambda_handler(event, context):
    rds = boto3.client('rds')
    # Timestamp makes every snapshot identifier unique
    timestamp = datetime.now().strftime('%Y-%m-%d-%H-%M')
    db_identifier = 'my-rds-db'
    snapshot_id = f"{db_identifier}-{timestamp}"
    rds.create_db_snapshot(
        DBInstanceIdentifier=db_identifier,
        DBSnapshotIdentifier=snapshot_id,
        Tags=[{'Key': 'CreatedBy', 'Value': 'Lambda-Automation'}]
    )
```
This script, combined with a Terraform-defined Lambda and trigger, allowed me to automate a process that would otherwise be manual and error-prone.
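The Terraform side of that trigger can be sketched roughly as follows. The resource name `aws_lambda_function.snapshot` is a placeholder assumed for illustration; the actual names in my project differed:

```hcl
# Fire the snapshot Lambda on a schedule
resource "aws_cloudwatch_event_rule" "snapshot_schedule" {
  name                = "rds-snapshot-schedule"
  schedule_expression = "rate(1 day)"
}

resource "aws_cloudwatch_event_target" "snapshot_lambda" {
  rule = aws_cloudwatch_event_rule.snapshot_schedule.name
  arn  = aws_lambda_function.snapshot.arn # hypothetical Lambda resource
}

# Without this permission, CloudWatch Events cannot invoke the function
resource "aws_lambda_permission" "allow_events" {
  statement_id  = "AllowExecutionFromCloudWatch"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.snapshot.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.snapshot_schedule.arn
}
```

The `aws_lambda_permission` resource was the piece I kept forgetting at first; without it, the rule fires but the invocation silently fails.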
🛠️ Key Skills and Knowledge Gained
🏗️ Terraform Modularization & Best Practices
I learned how to structure Terraform projects the right way:
- DRY principle (Don’t Repeat Yourself)
- Isolating modules for `vpc`, `ecs`, `rds`, `alb`, `cloudwatch`, `security`
- Using `tfvars` files for flexibility and reusability
- Separating sensitive logic (e.g. secrets and security groups)
```hcl
module "ecs" {
  source          = "./modules/ecs"
  cluster_name    = var.cluster_name
  task_definition = var.task_definition
}

module "rds" {
  source            = "./modules/rds"
  db_name           = var.db_name
  engine            = "mysql"
  allocated_storage = 20
}
```
This structure not only made my project more readable and professional, but also helped me rapidly iterate and improve components without affecting unrelated areas.
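The `tfvars`-driven flexibility mentioned above can be sketched like this; the variable names mirror the module example, and the values are illustrative placeholders:

```hcl
# variables.tf — declarations the modules consume
variable "cluster_name" {
  type        = string
  description = "Name of the ECS cluster"
}

variable "db_name" {
  type        = string
  description = "Name of the RDS database"
}

# terraform.tfvars — per-environment values (illustrative)
cluster_name = "m300-cluster"
db_name      = "appdb"
```

Swapping in a different `terraform.tfvars` file is then all it takes to stand up a second environment from the same modules.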
🔐 Security (WAF, IAM, VPC Security Groups)
I spent a good portion of the project hardening the infrastructure:
- Created `security-group-ecs`, `security-group-rds`, etc., in a dedicated Terraform file
- Integrated WAF (Web Application Firewall) with the ALB
- Used least privilege IAM roles for ECS, RDS, Lambda
Example snippet for security group:
```hcl
resource "aws_security_group" "security-group-ecs" {
  name        = "security-group-ecs"
  description = "Allow HTTP and HTTPS traffic"
  vpc_id      = var.vpc_id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # HTTPS, matching the description above
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```
This part of the project taught me the real-world impact of security—it’s not an afterthought, it’s a pillar.
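The WAF-to-ALB integration mentioned above boils down to a single association resource. This sketch assumes a WAFv2 web ACL and an ALB resource both named `main`; those names are placeholders, not the ones from my project:

```hcl
# Attach an existing WAFv2 web ACL to the ALB.
# Both referenced resource names are illustrative placeholders.
resource "aws_wafv2_web_acl_association" "alb" {
  resource_arn = aws_lb.main.arn
  web_acl_arn  = aws_wafv2_web_acl.main.arn
}
```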
📈 Monitoring & CI/CD Pipelines
I integrated CloudWatch Logs into ECS for visibility, and created an alarm system for critical metrics. I also implemented a semantic Docker tagging system in GitHub Actions:
```yaml
- name: Set Tag
  run: echo "TAG=$(git describe --tags --always)" >> $GITHUB_ENV

- name: Build & Push to ECR
  run: |
    docker build -t ${{ env.TAG }} .
    docker tag ${{ env.TAG }} $ECR_REPO:${{ env.TAG }}
    docker push $ECR_REPO:${{ env.TAG }}
```
These features gave the infrastructure a production-ready feel and made me appreciate the power of automation and observability.
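One of those CloudWatch alarms can be sketched minimally like this; the alarm name, threshold, and dimension values are illustrative, not the exact ones I used:

```hcl
# Alarm when average ECS CPU stays above 80% for two 5-minute periods
resource "aws_cloudwatch_metric_alarm" "ecs_cpu_high" {
  alarm_name          = "ecs-cpu-high" # illustrative name
  namespace           = "AWS/ECS"
  metric_name         = "CPUUtilization"
  statistic           = "Average"
  period              = 300
  evaluation_periods  = 2
  threshold           = 80
  comparison_operator = "GreaterThanThreshold"

  dimensions = {
    ClusterName = var.cluster_name
  }
}
```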
😮 How I Felt
At the start, I was excited and nervous—especially when writing my first modules and configuring ECS. There was a lot of doubt: Am I overcomplicating this? Will this even work? But soon, as things came together, I found my flow. I ended the core work around week 5–6, after which everything became about optimization and refinement.
From there, I restructured everything:
- Split Terraform into modules
- Improved security group separation
- Automated RDS backups with Lambda
- Refined the GitHub Action pipeline
This phase felt empowering. I wasn’t just "done"—I was making it better.
🌟 Final Thoughts
This project wasn’t just a task—it became a sandbox where I turned cloud theory into cloud reality. I left with:
- A solid understanding of Terraform infrastructure-as-code principles
- Practical skills with AWS Lambda, ECS, RDS, and CI/CD
- Awareness of security and modular design
- A mindset of iteration and clarity in documentation
I now understand what it means to own a project from end to end, and I’ve come out with sharper skills, more confidence, and a deeper appreciation for clean, scalable architecture.
❌ NOTE: The infrastructure has been fully decommissioned and is no longer accessible. However, its architecture, decisions, and lessons live on in my documentation and experience.