KR_CI_CD - somaz94/python-study GitHub Wiki
Below is a CI/CD pipeline configuration using GitHub Actions.
```yaml
# .github/workflows/python-app.yml
name: Python application

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.9'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install flake8 pytest
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
      - name: Lint with flake8
        run: |
          flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
      - name: Test with pytest
        run: |
          pytest
```
Features:
- Automated testing
- Code quality checks
- Dependency management
- Simple workflow configuration
- Support for multiple environments (see the caching and matrix sketch below)
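For projects that need faster builds or several runtimes, the same workflow is commonly extended with a pip cache and a Python version matrix. The excerpt below is a minimal sketch of that pattern; the cache key and the version list are illustrative choices, not part of the original workflow.

```yaml
# Workflow excerpt: `name:` and `on:` triggers omitted for brevity
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ['3.8', '3.9', '3.10']   # illustrative version list
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Cache pip downloads
        uses: actions/cache@v3
        with:
          path: ~/.cache/pip
          key: pip-${{ runner.os }}-${{ hashFiles('**/requirements.txt') }}
          restore-keys: pip-${{ runner.os }}-
      - name: Install dependencies
        run: pip install -r requirements.txt
```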
Below is a pipeline automation configuration using Jenkins.
```groovy
// Jenkinsfile
pipeline {
    agent any

    environment {
        PYTHON_VERSION = '3.9'
    }

    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }

        stage('Setup Python') {
            steps {
                sh """
                    pyenv install ${PYTHON_VERSION}
                    pyenv global ${PYTHON_VERSION}
                    python -m pip install --upgrade pip
                """
            }
        }

        stage('Install Dependencies') {
            steps {
                sh 'pip install -r requirements.txt'
            }
        }

        stage('Run Tests') {
            steps {
                sh 'pytest --junitxml=test-results.xml'
            }
            post {
                always {
                    junit 'test-results.xml'
                }
            }
        }

        stage('Build and Deploy') {
            when {
                branch 'main'
            }
            steps {
                sh '''
                    docker build -t myapp:${BUILD_NUMBER} .
                    docker push myapp:${BUILD_NUMBER}
                '''
            }
        }
    }
}
```
Features:
- Staged pipeline
- Environment configuration management
- Test result reporting
- Conditional deployment
- Extensibility through plugins
Below is a CI/CD pipeline configuration using GitLab CI/CD.
```yaml
# .gitlab-ci.yml
image: python:3.9

stages:
  - test
  - build
  - deploy

variables:
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.pip-cache"

cache:
  paths:
    - .pip-cache/

before_script:
  - python -V
  - pip install -r requirements.txt

test:
  stage: test
  script:
    - pytest --cov=. --cov-report=xml
  coverage: '/TOTAL.+ ([0-9]{1,3}%)/'
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml

build:
  stage: build
  script:
    - python setup.py bdist_wheel
  artifacts:
    paths:
      - dist/
    expire_in: 1 week

deploy_staging:
  stage: deploy
  script:
    - pip install twine
    - TWINE_PASSWORD=${CI_JOB_TOKEN} TWINE_USERNAME=gitlab-ci-token twine upload --repository-url ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/pypi dist/*
  only:
    - develop

deploy_production:
  stage: deploy
  script:
    - pip install twine
    - twine upload dist/*
  only:
    - main
  when: manual
```
Features:
- Multi-stage pipeline
- Cache management
- Coverage reporting
- Build artifact retention
- Per-environment deployment strategy (see the rules-based sketch below)
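Newer GitLab configurations tend to prefer `rules:` over `only:` and attach jobs to named environments. The job below is a minimal sketch of the production deploy rewritten that way, assuming the same Twine-based deploy script as above.

```yaml
deploy_production:
  stage: deploy
  script:
    - pip install twine
    - twine upload dist/*
  environment:
    name: production
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
      when: manual      # still requires a manual trigger in the GitLab UI
    - when: never       # skip the job on every other branch
```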
Below is a setup for consistent environments and automated deployment using Docker.
```yaml
# docker-compose.ci.yml
version: '3.8'

services:
  test:
    build:
      context: .
      dockerfile: Dockerfile.test
    volumes:
      - .:/app
    command: pytest

  build:
    build:
      context: .
      dockerfile: Dockerfile
    image: ${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHA}

  deploy:
    image: ${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHA}
    restart: unless-stopped
    environment:
      - ENVIRONMENT=production
      - DATABASE_URL=${DATABASE_URL}
      - SECRET_KEY=${SECRET_KEY}
    ports:
      - "8000:8000"
    depends_on:
      - db

  db:
    image: postgres:13-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_DB=${DB_NAME}
    restart: unless-stopped

volumes:
  postgres_data:
```

```dockerfile
# Dockerfile
FROM python:3.9-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
RUN python -m pytest

EXPOSE 8000
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]
```
Features:
- Containerized testing
- Consistent environments
- Version management
- Support for microservice architectures
- Easy scaling (see the health check sketch below)
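The deployment examples later on this page rely on an HTTP health check; the same check can also be declared directly in the compose file. The excerpt below is a minimal sketch, assuming the application exposes a `/health` endpoint and the image contains `curl` (neither is shown in the files above).

```yaml
# docker-compose.ci.yml (excerpt) -- adding a health check to the deploy service
services:
  deploy:
    image: ${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHA}
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]   # assumes a /health route and curl in the image
      interval: 30s
      timeout: 5s
      retries: 3
```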
Below are configurations for various testing strategies and deployment automation.
```yaml
# .github/workflows/automated-testing.yml
name: Automated Testing

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main, develop ]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: [3.8, 3.9]
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          pip install -r requirements.txt
          pip install pytest pytest-cov
      - name: Run tests
        run: |
          pytest --cov=./ --cov-report=xml
      - name: Upload coverage
        uses: codecov/codecov-action@v2
        with:
          file: ./coverage.xml
          fail_ci_if_error: true

  integration-test:
    runs-on: ubuntu-latest
    needs: test
    services:
      postgres:
        image: postgres:13
        env:
          POSTGRES_PASSWORD: postgres
          POSTGRES_USER: postgres
          POSTGRES_DB: test_db
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.9'
      - name: Install dependencies
        run: |
          pip install -r requirements.txt
          pip install -r requirements-dev.txt
      - name: Run integration tests
        run: |
          pytest tests/integration --cov=./ --cov-report=xml
        env:
          DATABASE_URL: postgresql://postgres:postgres@localhost:5432/test_db

  deploy:
    runs-on: ubuntu-latest
    needs: [test, integration-test]
    if: github.ref == 'refs/heads/main'
    steps:
      - uses: actions/checkout@v2
      - name: Deploy to production
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.HOST }}
          username: ${{ secrets.USERNAME }}
          key: ${{ secrets.SSH_KEY }}
          script: |
            cd /opt/app
            git pull
            docker-compose down
            docker-compose up -d --build
```
Features:
- Matrix testing
- Coverage analysis
- Automated deployment
- Integration test environment
- Rollback strategy on failure (see the sketch below)
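The deploy job above does not itself implement the rollback mentioned in the list. One possible shape, sketched below under the assumption that the previous commit on `main` was the last known-good state, is an extra step in the deploy job that runs only when an earlier step fails.

```yaml
      # Extra step inside the deploy job; runs only when an earlier step failed.
      - name: Roll back on failed deploy
        if: failure()
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.HOST }}
          username: ${{ secrets.USERNAME }}
          key: ${{ secrets.SSH_KEY }}
          script: |
            cd /opt/app
            git reset --hard HEAD~1        # assumes HEAD~1 was the last known-good revision
            docker-compose up -d --build
```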
Below are advanced CI/CD strategies for complex projects.
```yaml
# .github/workflows/blue-green-deploy.yml
name: Blue-Green Deployment

on:
  push:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build Docker image
        run: |
          docker build -t myapp:${{ github.sha }} .
          docker tag myapp:${{ github.sha }} myregistry.com/myapp:${{ github.sha }}
          docker push myregistry.com/myapp:${{ github.sha }}

  deploy:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - name: Deploy new environment
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.PROD_HOST }}
          username: ${{ secrets.USERNAME }}
          key: ${{ secrets.SSH_KEY }}
          script: |
            # Decide which environment to deploy to
            NEW_ENV="green"
            CURRENT_ENV=$(cat /opt/env_status)
            if [ "$CURRENT_ENV" = "green" ]; then
              NEW_ENV="blue"
            fi

            cd /opt
            docker-compose -f docker-compose.$NEW_ENV.yml down

            # Update to the new image
            sed -i "s|image:.*|image: myregistry.com/myapp:${{ github.sha }}|g" docker-compose.$NEW_ENV.yml

            # Start the new environment
            docker-compose -f docker-compose.$NEW_ENV.yml up -d

            # Health check
            sleep 30
            if curl -s http://localhost:8080/health | grep -q "ok"; then
              # Update the load balancer
              sed -i "s/proxy_pass http:\/\/[a-z]*_backend/proxy_pass http:\/\/${NEW_ENV}_backend/g" /etc/nginx/conf.d/app.conf
              nginx -s reload

              # Record the now-active environment
              echo $NEW_ENV > /opt/env_status

              # Shut down the previous environment after a grace period
              sleep 60
              docker-compose -f docker-compose.$CURRENT_ENV.yml down
            else
              # Roll back if the health check fails
              docker-compose -f docker-compose.$NEW_ENV.yml down
              echo "Deployment failed health check" >&2
              exit 1
            fi
```
```python
# canary_deploy.py
import os
import subprocess
import time

import requests


def deploy_canary():
    """Run a canary deployment."""
    # Point the canary Deployment at the new image (about 10% of traffic
    # via the replica ratio set below)
    subprocess.run([
        "kubectl", "set", "image", "deployment/app-canary",
        f"app-container=myregistry.com/myapp:{os.environ['NEW_VERSION']}",
        "--record"
    ])

    # Set the canary weight: 10 stable replicas vs 1 canary replica
    subprocess.run([
        "kubectl", "scale", "deployment/app", "--replicas=10"
    ])
    subprocess.run([
        "kubectl", "scale", "deployment/app-canary", "--replicas=1"
    ])

    # Monitoring period
    print("Monitoring canary deployment for 15 minutes...")
    for i in range(15):
        # Check the error rate
        error_rate = check_error_rate()
        if error_rate > 0.01:  # roll back if the error rate exceeds 1%
            print(f"Error rate too high: {error_rate}, rolling back")
            rollback_canary()
            return False

        # Check the latency metric
        latency = check_latency()
        if latency > 500:  # roll back if p95 latency exceeds 500 ms
            print(f"Latency too high: {latency}ms, rolling back")
            rollback_canary()
            return False

        print(f"Minute {i+1}/15: Metrics within acceptable range")
        time.sleep(60)

    # Switch all traffic to the new version
    print("Canary deployment successful, switching all traffic to new version")
    subprocess.run([
        "kubectl", "set", "image", "deployment/app",
        f"app-container=myregistry.com/myapp:{os.environ['NEW_VERSION']}",
        "--record"
    ])

    # Remove the canary
    subprocess.run([
        "kubectl", "scale", "deployment/app-canary", "--replicas=0"
    ])
    return True


def check_error_rate():
    """Read the error rate from Prometheus."""
    response = requests.get(
        "http://prometheus:9090/api/v1/query",
        params={
            "query": 'sum(rate(http_requests_total{status=~"5.."}[5m])) / sum(rate(http_requests_total[5m]))'
        }
    )
    result = response.json()
    return float(result['data']['result'][0]['value'][1])


def check_latency():
    """Read the p95 response time from Prometheus."""
    response = requests.get(
        "http://prometheus:9090/api/v1/query",
        params={
            "query": 'histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket[5m])) by (le))'
        }
    )
    result = response.json()
    return float(result['data']['result'][0]['value'][1]) * 1000  # convert to ms


def rollback_canary():
    """Roll back the canary deployment."""
    subprocess.run([
        "kubectl", "scale", "deployment/app-canary", "--replicas=0"
    ])
    print("Canary deployment rolled back")


if __name__ == "__main__":
    deploy_canary()
```
Features:
- Blue/green deployment
- Canary deployment
- Automated rollback
- Gradual traffic shifting
- Metrics-based deployment decisions (see the canary manifest sketch below)
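The canary script assumes a second Kubernetes Deployment named `app-canary` that shares a Service selector with the stable `app` Deployment, so the 10:1 replica ratio approximates a 10% traffic split. The manifest below is an illustrative sketch of that assumption; the labels, port, and Service wiring are not shown in the original.

```yaml
# Illustrative manifest for the app-canary Deployment used by canary_deploy.py
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-canary
spec:
  replicas: 0                     # scaled up to 1 by the script during a canary run
  selector:
    matchLabels:
      app: myapp
      track: canary
  template:
    metadata:
      labels:
        app: myapp                # shared label assumed to be selected by the Service
        track: canary
    spec:
      containers:
        - name: app-container
          image: myregistry.com/myapp:latest   # overwritten by `kubectl set image`
          ports:
            - containerPort: 8000
```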
Below are approaches for monitoring and securing the CI/CD pipeline.
```yaml
# .github/workflows/security-scan.yml
name: Security Scanning

on:
  push:
    branches: [ main, develop ]
  schedule:
    - cron: '0 0 * * *'  # run daily at midnight (UTC)

jobs:
  dependency-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.9'
      - name: Install dependencies
        run: |
          pip install safety bandit
      - name: Check for vulnerable dependencies
        run: |
          safety check -r requirements.txt
      - name: Static code security analysis
        run: |
          bandit -r . -x tests/

  container-scan:
    runs-on: ubuntu-latest
    needs: dependency-scan
    steps:
      - uses: actions/checkout@v2
      - name: Build image
        run: |
          docker build -t myapp:${{ github.sha }} .
      - name: Scan container for vulnerabilities
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: 'myapp:${{ github.sha }}'
          format: 'table'
          exit-code: '1'
          severity: 'CRITICAL,HIGH'
```
```python
# pipeline_monitor.py
import datetime
import json
import os
import time

import requests
from prometheus_client import Counter, Gauge, start_http_server

# Prometheus metric definitions
builds_total = Counter('ci_builds_total', 'Total number of CI builds', ['status', 'branch'])
build_duration = Gauge('ci_build_duration_seconds', 'Duration of CI builds', ['branch'])
test_failures = Counter('ci_test_failures_total', 'Total number of test failures')
deployment_success = Counter('ci_deployments_total', 'Total number of deployments', ['environment', 'status'])


def get_github_builds(owner, repo, token):
    """Collect GitHub Actions workflow run information."""
    headers = {'Authorization': f'token {token}'}
    url = f'https://api.github.com/repos/{owner}/{repo}/actions/runs'
    response = requests.get(url, headers=headers)
    if response.status_code == 200:
        runs = response.json()['workflow_runs']
        for run in runs:
            status = run['conclusion'] or 'pending'
            branch = run['head_branch']

            # Increment the build counter
            builds_total.labels(status=status, branch=branch).inc()

            # Measure build duration
            if run['updated_at'] and run['created_at'] and status != 'pending':
                start_time = datetime.datetime.fromisoformat(run['created_at'].replace('Z', '+00:00'))
                end_time = datetime.datetime.fromisoformat(run['updated_at'].replace('Z', '+00:00'))
                duration = (end_time - start_time).total_seconds()
                build_duration.labels(branch=branch).set(duration)

            # Collect test failures
            if status == 'failure':
                # Fetch details about which jobs failed
                jobs_url = f'https://api.github.com/repos/{owner}/{repo}/actions/runs/{run["id"]}/jobs'
                jobs_response = requests.get(jobs_url, headers=headers)
                if jobs_response.status_code == 200:
                    jobs = jobs_response.json()['jobs']
                    for job in jobs:
                        if job['conclusion'] == 'failure' and 'test' in job['name'].lower():
                            test_failures.inc()


def get_deployment_status(owner, repo, token, environment):
    """Collect deployment status information."""
    headers = {'Authorization': f'token {token}'}
    url = f'https://api.github.com/repos/{owner}/{repo}/deployments'
    response = requests.get(url, headers=headers)
    if response.status_code == 200:
        deployments = response.json()
        for deployment in deployments:
            if deployment['environment'] == environment:
                # Fetch the latest status of this deployment
                status_url = deployment['statuses_url']
                status_response = requests.get(status_url, headers=headers)
                if status_response.status_code == 200:
                    statuses = status_response.json()
                    if statuses:
                        latest_status = statuses[0]['state']
                        deployment_success.labels(
                            environment=environment,
                            status=latest_status
                        ).inc()


def main():
    # Start the Prometheus metrics server
    start_http_server(8000)

    # Read settings from environment variables
    owner = os.environ.get('GITHUB_OWNER')
    repo = os.environ.get('GITHUB_REPO')
    token = os.environ.get('GITHUB_TOKEN')
    environments = os.environ.get('DEPLOY_ENVIRONMENTS', 'production,staging').split(',')

    while True:
        try:
            # Collect GitHub build information
            get_github_builds(owner, repo, token)

            # Collect deployment status information
            for env in environments:
                get_deployment_status(owner, repo, token, env)
        except Exception as e:
            print(f"Error collecting metrics: {e}")

        # Refresh the data every 5 minutes
        time.sleep(300)


if __name__ == "__main__":
    main()
```
Features:
- Integrated vulnerability scanning
- Dependency security checks
- Container image scanning
- Pipeline performance monitoring (see the alert rule sketch below)
- Deployment status tracking
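Once Prometheus scrapes the exporter started by `pipeline_monitor.py`, alerting can be layered on top of the exported metrics. The rule file below is an illustrative sketch; the thresholds and alert names are assumptions, not values from the original.

```yaml
# prometheus/ci_alerts.yml -- example alerting rules for the exported CI metrics
groups:
  - name: ci_pipeline
    rules:
      - alert: SlowCIBuildsOnMain
        expr: ci_build_duration_seconds{branch="main"} > 900   # builds slower than 15 minutes
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "CI builds on main are taking longer than 15 minutes"
      - alert: FailedProductionDeployment
        expr: increase(ci_deployments_total{environment="production", status="failure"}[1h]) > 0
        labels:
          severity: critical
        annotations:
          summary: "A production deployment failed within the last hour"
```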
Best practices:
- Environment variable management: keep sensitive values in environment variables or secrets
- Security first: include security scanning steps in the build and deployment process
- Test automation: automate unit, integration, and E2E tests to guarantee quality
- Deployment strategy: pick the approach that fits, whether blue/green, canary, or rolling
- Monitoring: track pipeline performance and deployment results
- Rollback planning: implement automatic or manual rollback for failed deployments
- Documentation: keep pipeline configuration and environment settings documented
- Performance optimization: speed up the pipeline with build caching and parallel jobs
- Infrastructure as code: manage infrastructure configuration as code for reproducibility
- Branching strategy: establish an appropriate Git branching strategy and PR policy
- Progressive testing: introduce more complex tests incrementally
- Artifact management: define versioning and retention policies for build artifacts
- Integrated notifications: send pipeline results to the team's communication tools (see the sketch below)
- Resource limits: cap resource usage in CI/CD environments
- Continuous improvement: review and improve pipeline performance and efficiency regularly
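For the notification practice above, a webhook call is often the simplest integration. The step below is a minimal sketch that can be appended to any of the GitHub Actions jobs on this page; the `SLACK_WEBHOOK_URL` secret is an assumption and has to be created in the repository settings first.

```yaml
      # Final step of a job; posts the result to Slack via an incoming webhook.
      - name: Notify team on Slack
        if: always()               # report both success and failure
        run: |
          curl -X POST -H 'Content-type: application/json' \
            --data "{\"text\": \"${{ github.workflow }} on ${{ github.ref_name }} finished: ${{ job.status }}\"}" \
            "${{ secrets.SLACK_WEBHOOK_URL }}"   # assumed secret, not defined on this page
```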