Week 7 – Daily Practice Tasks – Docker Compose & Azure VM - snir1551/DevOps-Linux GitHub Wiki

Task 1 – Create and Run Multi-Container App with Docker Compose

1. backend/app.js

import express from 'express';
import mongoose from 'mongoose';

const app = express();
const PORT = process.env.PORT || 3001;

const mongoUri = 'mongodb://admin:123@mongo:27017/testdb?authSource=admin';

mongoose.connect(mongoUri)
.then(() => console.log('Connected to MongoDB'))
.catch(err => console.error('MongoDB connection error:', err));

app.get('/', (req, res) => {
  res.send('Hello from Backend');
});

app.listen(PORT, () => {
  console.log(`Server running at http://localhost:${PORT}`);
});

2. backend/Dockerfile

FROM node:18-alpine

WORKDIR /app

COPY package*.json ./

RUN npm install

COPY . .

ENV PORT=3001

EXPOSE ${PORT}

CMD ["npm", "run", "dev"]

3. backend/.dockerignore

node_modules
Dockerfile
.dockerignore
.git
.gitignore
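To see what excluding node_modules from the build context actually saves, here is a small local experiment (a sketch using throwaway paths under /tmp, not part of the project; the 1 MB fake dependency blob just stands in for a real node_modules tree):

```shell
# Simulate a build context with a heavy node_modules directory
mkdir -p /tmp/ctx/node_modules
head -c 1048576 /dev/zero > /tmp/ctx/node_modules/blob   # 1 MB of fake deps
echo 'console.log("hi")' > /tmp/ctx/app.js

# Size with and without node_modules (what .dockerignore spares the daemon)
full_kb=$(du -sk /tmp/ctx | cut -f1)
src_kb=$(du -sk --exclude=node_modules /tmp/ctx | cut -f1)
echo "build context: ${full_kb} kB full vs ${src_kb} kB without node_modules"
```

On a real project the gap is usually hundreds of megabytes, which is time the docker CLI would otherwise spend shipping files to the daemon on every build.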

4. frontend/Dockerfile

FROM node:18-alpine

WORKDIR /app

COPY package*.json ./

RUN npm install

COPY . .

EXPOSE 3000

CMD ["npm", "start"]

5. frontend/.dockerignore

node_modules
Dockerfile
.dockerignore
.git
.gitignore

6. docker-compose.yml

version: '3.8'

services:
  backend:
    build: ./backend
    ports:
      - "3001:3001"
    depends_on:
      - mongo
    networks:
      - appnet

  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    depends_on:
      - backend
    networks:
      - appnet

  mongo:
    image: mongo
    ports:
      - "27017:27017"
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: 123
    networks:
      - appnet

networks:
  appnet:
    driver: bridge

7. Start the containers

docker-compose up -d

8. Verify containers are running

docker-compose ps

9. Inspect networking

docker exec -it <container name> ping mongo
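`docker-compose up -d` returns before the apps are actually listening, so a one-off curl can fail on a perfectly healthy stack. A minimal retry helper (a sketch; the probe command, attempt count, and delay are illustrative):

```shell
# Retry a probe command until it succeeds or we run out of attempts
wait_for() {
  local cmd=$1 attempts=${2:-10} delay=${3:-1} i
  for ((i = 1; i <= attempts; i++)); do
    if eval "$cmd" >/dev/null 2>&1; then
      echo "ready after $i attempt(s)"
      return 0
    fi
    sleep "$delay"
  done
  echo "gave up after $attempts attempts" >&2
  return 1
}

# Demo with a trivially-true probe; against the real stack you would use
# something like: wait_for "curl -sf http://localhost:3001" 30 2
wait_for true 3 0
```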

Task 2 – Volume Mounting and Persistent Data

1. .env file

PORT=3001
BACKEND_PORT=3001
FRONTEND_PORT=3000
MONGO_HOST=mongo
MONGO_PORT=27017
MONGO_DB=testdb
MONGO_INITDB_ROOT_USERNAME=admin
MONGO_INITDB_ROOT_PASSWORD=123

(BACKEND_PORT and FRONTEND_PORT are interpolated by the docker-compose.yml below; PORT=3001 matches the backend's default and its healthcheck.)
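If a variable that docker-compose interpolates is missing from the .env file, a port mapping like "${BACKEND_PORT}:${BACKEND_PORT}" silently expands to an empty string. A quick pre-flight check (a sketch; the variable list and the /tmp path are illustrative, adapt them to your compose file):

```shell
# Variables this compose setup expects to find in .env
required="PORT MONGO_HOST MONGO_PORT MONGO_DB"

# Build a demo .env to check against (in real use, point at your project's .env)
envfile=/tmp/demo.env
printf '%s\n' 'PORT=3000' 'MONGO_HOST=mongo' 'MONGO_PORT=27017' 'MONGO_DB=testdb' > "$envfile"

missing=""
for var in $required; do
  grep -q "^${var}=" "$envfile" || missing="$missing $var"
done
[ -z "$missing" ] && echo "all required variables present" || echo "missing:$missing"
```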

2. Update backend/app.js:

import express from 'express';
import mongoose from 'mongoose';
import dotenv from 'dotenv';

const app = express();


// Snapshot the environment before dotenv runs, so we can tell where PORT came from
const beforeEnv = { ...process.env };

// Load variables from the .env file (already-set variables are not overridden)
dotenv.config();

const PORT = process.env.PORT || 3000;

// If PORT was absent before dotenv.config(), it must have come from the .env file
const loadedFromEnvFile = Object.hasOwn(beforeEnv, 'PORT') === false;

console.log(`PORT loaded from ${loadedFromEnvFile ? '.env file' : 'Dockerfile ENV'}`);

const mongoUri = `mongodb://${process.env.MONGO_INITDB_ROOT_USERNAME}:${process.env.MONGO_INITDB_ROOT_PASSWORD}@${process.env.MONGO_HOST}:${process.env.MONGO_PORT}/${process.env.MONGO_DB}?authSource=admin`;

mongoose.connect(mongoUri)
.then(() => console.log('Connected to MongoDB'))
.catch(err => console.error('MongoDB connection error:', err));

app.get('/', (req, res) => {
  res.send('Hello from Backend');
});

app.listen(PORT, () => {
  console.log(`Server running at http://localhost:${PORT}`);
});
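The precedence rule the snippet above detects (a variable already set by Dockerfile ENV wins over the .env file) can be mimicked in plain shell. A sketch, with a throwaway .env under /tmp and a hypothetical load_env helper standing in for dotenv.config():

```shell
# A demo .env file, like the one backend reads
cat > /tmp/demo.env <<'EOF'
PORT=3000
EOF

# dotenv's default behavior: only set variables that are not already set
load_env() {
  while IFS='=' read -r key val; do
    if [ -z "${!key+x}" ]; then export "$key=$val"; fi
  done < /tmp/demo.env
}

unset PORT
load_env
first=$PORT        # nothing was set, so .env supplied the value

export PORT=3001   # simulate Dockerfile ENV PORT=3001
load_env
echo "preset wins: PORT=$PORT (was $first from .env when unset)"
```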

3. update docker-compose.yml:

version: '3.8'

services:
  backend:
    build: ./backend
    ports:
      - "${BACKEND_PORT}:${BACKEND_PORT}"
    volumes:
      - ./backend:/app
      - /app/node_modules
    env_file:
      - .env
    depends_on:
      - mongo
    networks:
      - appnet

  frontend:
    build: ./frontend
    ports:
      - "${FRONTEND_PORT}:${FRONTEND_PORT}"
    volumes:
      - ./frontend:/app
      - /app/node_modules
    depends_on:
      - backend
    networks:
      - appnet

  mongo:
    image: mongo
    ports:
      - "${MONGO_PORT}:${MONGO_PORT}"
    env_file:
      - .env
    volumes:
      - mongo-data:/data/db
    networks:
      - appnet

volumes:
  mongo-data:

networks:
  appnet:
    driver: bridge

4. Volume Explanation

The compose file makes strategic use of Docker volumes to achieve two goals:

  • Persistent data that survives container restarts or re‑builds.

  • Fast local development with automatic code reload inside the containers.

Volumes are the link between your host machine and the running containers. Without them, every docker-compose up --build would start from a blank slate, erasing the database and forcing you to copy source files into the image on every change.

| Service | Compose entry | Purpose |
| --- | --- | --- |
| backend | `./backend:/app` | Mounts the entire host `backend/` directory into `/app` inside the container. Any file you change locally becomes instantly visible to the Node.js process, enabling hot reload (e.g., with nodemon). |
| backend | `/app/node_modules` | Creates an anonymous volume for `node_modules` inside the container. This keeps OS-specific binaries built during `npm install` isolated from the host file system, preventing permission issues and "works on my machine" bugs. |
| frontend | `./frontend:/app` | Same idea as the backend: live-mount the React/Vite source for instant feedback. |
| frontend | `/app/node_modules` | Again, isolate compiled dependencies from the host. |

Benefits of these bind mounts:

  • Rapid iteration – Save a file, refresh the browser, see the change.

  • Editor convenience – Keep using your favourite IDE on the host.

  • Zero rebuilds – Only rebuild the image when dependencies change.

Named Volume (Database Persistence):

| Service | Compose entry | Host path | Container path |
| --- | --- | --- | --- |
| mongo | `mongo-data:/data/db` | Docker-managed volume called `mongo-data` | `/data/db` (MongoDB's default data directory) |

mongo-data is declared at the bottom of the file:

volumes:
  mongo-data:

Docker creates this volume the first time you run docker-compose up. Because it lives outside the container’s writable layer, your collections and documents persist when you:

  • Rebuild or upgrade the mongo image.

  • Stop/start containers (docker-compose down/up).

  • Restart your computer.


Task 3 – Healthchecks and Logging

1. update docker-compose.yml

version: '3.8'

services:
  backend:
    build: ./backend
    ports:
      - "${BACKEND_PORT}:${BACKEND_PORT}"
    volumes:
      - ./backend:/app
      - /app/node_modules
    env_file:
      - .env
    depends_on:
      - mongo
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3001"]
      interval: 30s
      timeout: 5s
      retries: 3
    networks:
      - appnet

  frontend:
    build: ./frontend
    ports:
      - "${FRONTEND_PORT}:${FRONTEND_PORT}"
    volumes:
      - ./frontend:/app
      - /app/node_modules
    depends_on:
      - backend
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000"]
      interval: 30s
      timeout: 5s
      retries: 3
    networks:
      - appnet

  mongo:
    image: mongo
    ports:
      - "${MONGO_PORT}:${MONGO_PORT}"
    env_file:
      - .env
    volumes:
      - mongo-data:/data/db
    networks:
      - appnet

volumes:
  mongo-data:

networks:
  appnet:
    driver: bridge

  • test: Runs a command to probe container health. This check uses curl to request http://localhost:3001; if curl cannot connect or gets an HTTP error status (the -f flag), the probe counts as a failure.
  • interval: 30s Docker runs the healthcheck every 30 seconds.
  • timeout: 5s Each check must complete within 5 seconds, or it is considered failed.
  • retries: 3 The container is marked as unhealthy only after 3 consecutive failed checks.
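These three settings together determine how long a failing container can go unnoticed. A back-of-envelope sketch of the worst case, using the values from the compose file above:

```shell
# Healthcheck parameters from docker-compose.yml
interval=30   # seconds between probes
timeout=5     # max seconds a single probe may take
retries=3     # consecutive failures required to flip to 'unhealthy'

# Worst case: every probe hangs for the full timeout before failing,
# and Docker waits a full interval between probes
worst=$(( retries * (interval + timeout) ))
echo "marked unhealthy after at most ${worst}s of continuous failure"
```

If 105 seconds is too slow for your use case, tighten interval and retries; if the app is slow to boot, consider Compose's start_period option so startup failures are not counted.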

2. Update the Dockerfiles (curl must be installed in the image or the healthcheck always fails; shown here for the frontend, and the same RUN line goes in backend/Dockerfile)

FROM node:18-alpine

WORKDIR /app

COPY package*.json ./

RUN npm install

RUN apk add --no-cache curl

COPY . .

EXPOSE 3000

CMD ["npm", "start"]

Task 4 – Docker Compose + CI Integration:

name: CI Pipeline

on:
  push:
    branches: [ main ]
  pull_request:

jobs:
  test-e2e:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Docker
        uses: docker/setup-buildx-action@v3

      - name: Create .env file
        run: |
          echo "PORT=3001" >> .env
          echo "FRONTEND_PORT=3000" >> .env
          echo "MONGO_HOST=mongo" >> .env
          echo "MONGO_PORT=27017" >> .env
          echo "MONGO_DB=testdb" >> .env
          echo "MONGO_INITDB_ROOT_USERNAME=admin" >> .env
          echo "MONGO_INITDB_ROOT_PASSWORD=admin123" >> .env

      - name: Build and start services
        run: docker-compose up -d --build

      - name: Wait for containers to be healthy
        run: |
          sleep 5
          docker-compose ps
          docker inspect --format='{{json .State.Health}}' task7_daily-backend-1

      - name: Run backend tests
        run: docker-compose exec -T backend npm test

      - name: Collect logs if tests fail
        if: failure()
        run: |
          docker-compose logs > docker-logs.txt

      - name: Upload logs
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: docker-logs
          path: docker-logs.txt

      - name: Shut down
        run: docker-compose down

Task 5 – Lightweight Base Images and Optimization

FROM node:18-slim

RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*

WORKDIR /app

COPY package*.json ./

RUN npm install

COPY . .

ENV PORT=3001

EXPOSE ${PORT}

CMD ["npm", "run", "dev"]

Task 6 – Azure VM Setup and Manual Deployment:

Connect to the Azure VM with key-based SSH (no password):

ssh -i Linux-VM01_key.pem azureuser@<vm-public-ip>

Install Docker & Docker Compose

# Update package info
sudo apt update

# Install Docker
sudo apt install -y docker.io

# Enable and start Docker
sudo systemctl enable docker
sudo systemctl start docker

# Install Docker Compose
sudo apt install -y docker-compose

Copy the Project Files to the VM

(exit the SSH session first to get back to your local machine) On the local machine, in the project folder:

scp -i Linux-VM01_key.pem -r ./DevOps-Linux/week7_practice [email protected]:~/week7practice/

Build and Run the App on the VM

Connect to the VM again (ssh azureuser@<vm-public-ip>) and then:

cd ~/week7practice
sudo docker-compose up -d --build

Steps to Stop Docker via Azure Run Command (if your VM is stuck):

Step 1 – Sign in to Azure Portal

  1. Go to: https://portal.azure.com/

  2. In the left sidebar (or use the top search bar), click: Virtual Machines

  3. Select your virtual machine (e.g., Linux-VM01)

Step 2 – Open Run Command

  1. Inside the VM panel, scroll down in the left sidebar.

  2. Look for: Run command (It's under the Operations section)

  3. Click on Run command

  4. From the list, select:

    • RunShellScript

Step 3 – Paste the following script:

#!/bin/bash

# Kill all running Docker containers (ignore errors if none are running)
sudo docker kill $(sudo docker ps -q) || true

# Stop the Docker service
sudo systemctl stop docker

# Stop the Docker socket to prevent it from restarting automatically
sudo systemctl stop docker.socket

# Kill any stuck docker-compose processes (if using the old Python-based version)
sudo pkill -9 docker-compose || true

Step 4 – Click Run

  • The script will execute remotely on the VM.
  • You’ll see the output logs below.
  • Once complete, Docker will be fully stopped, and your VM should no longer freeze due to Docker resource usage.

This failed in my case because I was using a 'cheap' VM with only 344 MB of RAM, so I added swap:

  • Swap gives you virtual memory using disk. It’s not as fast as RAM, but prevents OOM crashes.
  • Run these commands on the VM:
# Create a 1GB swap file
sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile

# Set up the swap space
sudo mkswap /swapfile

# Enable swap
sudo swapon /swapfile

# Make it persistent (so it works after reboot)
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

# Check the result; you should see a line like 'Swap:   1.0G   0B   1.0G'
free -h
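After being bitten once, it is cheap to check headroom before starting the stack. A guard sketch that reads /proc/meminfo directly (the 700 MB threshold is a hypothetical number for this three-container stack, not a measured requirement):

```shell
min_mb=700   # hypothetical minimum RAM+swap for backend + frontend + mongo

# Total RAM and swap as the kernel reports them, in kB
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
swap_kb=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
avail_mb=$(( (total_kb + swap_kb) / 1024 ))

if [ "$avail_mb" -lt "$min_mb" ]; then
  echo "only ${avail_mb} MB RAM+swap; add swap before 'docker-compose up'" >&2
else
  echo "${avail_mb} MB RAM+swap available; ok to start"
fi
```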

Expose Public Ports (Backend: 3001, Frontend: 3000)

  • Go back to the local machine (exit) and run:

az vm open-port --resource-group MyResourceGroup --name Linux-VM01 --port 3000 --priority 310
az vm open-port --resource-group MyResourceGroup --name Linux-VM01 --port 3001 --priority 311

  • When I tried without --priority, the command failed with a rule-priority conflict.
  • Each rule must have a unique priority (between 100 and 4096; a lower number means higher priority).
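Picking a free priority by hand gets tedious once a few rules exist. A sketch of finding the next unused one, with a hard-coded example list standing in for the real priorities (which you could pull from `az network nsg rule list`):

```shell
# Example priorities already taken in the NSG (illustrative values)
used="100 300 310 1010"

# Scan upward from the minimum allowed priority until we hit a free slot
next=100
while echo " $used " | grep -q " $next "; do
  next=$((next + 1))
done
echo "next free priority: $next"
```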

Expose Public Ports Manually via the Portal (Backend: 3001, Frontend: 3000)

  1. Go to Azure Portal → your VM → Networking tab.

  2. Under Inbound port rules, click + Add inbound port rule.

  3. Fill the form as follows:

    • Source: Any
      → Allows connections from all external IP addresses (can restrict for security).
    • Source port ranges: *
      → Accepts traffic from any source port (standard).
    • Destination: Any
      → Refers to any destination IP within the VM (standard).
    • Destination port ranges: 3000
      → The public port your container is exposed on
    • Protocol: TCP
      → Most web traffic uses TCP; this is the common setting for web apps.
    • Action: Allow
      → Approves traffic instead of denying it.
    • Priority: 1010
      → Determines rule evaluation order; lower = higher priority. Must be unique.
    • Name: AllowPort3000 (or any descriptive name)
  4. Click Add to apply the rule.

  5. Repeat steps 1-4 for the backend port: add a second inbound rule with Destination port ranges 3001, Priority 1020, and Name AllowPort3001.

Verify Application is Running

  • Backend: http://<public_ip>:3001

  • Frontend: http://<public_ip>:3000

  • To check logs or health:

sudo docker ps
sudo docker-compose logs --tail=50

Task 7 – Deploy to Azure VM via CI/CD (GitHub Actions)

We start by adding repository secrets in GitHub

(Settings > Secrets and variables > Actions):

VM_HOST → the SSH connection string for the VM (e.g. azureuser@<vm-public-ip>)

VM_SSH_KEY → the private SSH key

This is the private key file content (.pem) that GitHub Actions will use to connect to your Azure VM over SSH.

You must paste the entire contents of your private key file, including the header and footer:

-----BEGIN RSA PRIVATE KEY-----
... (many long lines of key content) ...
-----END RSA PRIVATE KEY-----

Important Notes: Do not paste only part of the key (e.g., without the BEGIN/END lines); a truncated key surfaces later as cryptic SSH failures such as a libcrypto error.
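Since a truncated secret only fails deep inside ssh, it is worth failing fast. A sketch of a pre-flight check the workflow could run right after writing the key file (the file paths here are illustrative):

```shell
# A key file is only usable if both armor lines survived the paste
check_key() {
  grep -q -- '-----BEGIN' "$1" && grep -q -- '-----END' "$1"
}

# Demo: a structurally complete (fake) key passes the check
printf '%s\n' '-----BEGIN RSA PRIVATE KEY-----' 'MIIB...' '-----END RSA PRIVATE KEY-----' > /tmp/key.pem
if check_key /tmp/key.pem; then echo "key looks complete"; fi
```

In the workflow you would call this on key.pem right after the "Write SSH key" step and exit non-zero if it fails.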


Create the Workflow File

name: CD – Deploy to Azure VM Task7

on:
  workflow_run:
    workflows: ["CI – Test & Build Task7"]
    types:
      - completed

jobs:
  deploy:
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Write SSH key
        run: |
          echo "${{ secrets.VM_SSH_KEY }}" > key.pem
          chmod 600 key.pem
      - name: Sync files to Azure VM
        run: |
          ssh -i key.pem -o StrictHostKeyChecking=no ${{ secrets.VM_HOST }} "mkdir -p /home/snir1551/week7practice"
          rsync -az --delete --exclude='.git' --exclude='node_modules' -e "ssh -i key.pem -o StrictHostKeyChecking=no" ./week7_practice/ ${{ secrets.VM_HOST }}:/home/snir1551/week7practice/
      - name: Deploy with Docker Compose
        run: |
          ssh -i key.pem -o StrictHostKeyChecking=no ${{ secrets.VM_HOST }} "
            cd /home/snir1551/week7practice &&
            sudo docker-compose down --remove-orphans &&
            sudo docker-compose up -d --build
          "
      - name: Healthcheck & logs
        run: |
          ssh -i key.pem -o StrictHostKeyChecking=no ${{ secrets.VM_HOST }} "
            cd /home/snir1551/week7practice
            sudo docker-compose ps
            sudo docker-compose logs --tail=50
          " > remote_logs.txt
      - name: Upload logs
        uses: actions/upload-artifact@v4
        with:
          name: remote-logs
          path: remote_logs.txt

      - name: Cleanup SSH key
        run: rm key.pem

week7_practice
