# Deployment Guides (Kerdos-Infrasoft-Private-Limited/Kerdos-Infrasoft GitHub Wiki)
This guide covers how to deploy Kerdos AI services or the Kerdos API Platform in your environment, ensuring scalability, security, and high availability.
## Prerequisites

- Docker installed on your server or local machine
- A Kubernetes cluster (optional, for large-scale deployments)
- Access to a cloud provider (AWS, Azure, or GCP) if deploying in the cloud
- API credentials from the Kerdos platform
## Docker Deployment

- Clone the repository:

  ```bash
  git clone https://github.com/kerdosdotio/kerdosai.git
  cd kerdosai
  ```

- Build the Docker image:

  ```bash
  docker build -t kerdosai:latest .
  ```

- Run the container:

  ```bash
  docker run -d -p 8000:8000 kerdosai:latest
  ```

- Access the API at `http://localhost:8000`.
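After starting the container, it can take a moment before the API answers requests. A minimal sketch of a readiness poll in Python (the URL and timeout values are placeholders for your deployment; this assumes any HTTP response means the service is reachable):

```python
import time
import urllib.error
import urllib.request


def wait_for_api(url: str, timeout: float = 60.0, interval: float = 2.0) -> bool:
    """Poll `url` until it answers an HTTP request or `timeout` expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=interval):
                return True
        except urllib.error.HTTPError:
            # The server answered with an HTTP error status: it is reachable.
            return True
        except (urllib.error.URLError, OSError):
            time.sleep(interval)  # not up yet; retry after a short pause
    return False
```

For example, `wait_for_api("http://localhost:8000", timeout=60)` returns `True` once the container above starts answering requests.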
## Kubernetes Deployment

- Prepare your Kubernetes cluster.
- Apply the deployment manifests:

  ```bash
  kubectl apply -f k8s/deployment.yaml
  kubectl apply -f k8s/service.yaml
  ```

- Verify that the pods are running:

  ```bash
  kubectl get pods
  ```

- Access the service through the configured LoadBalancer or Ingress.
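The contents of the manifests referenced above depend on your setup; a minimal sketch of what `k8s/deployment.yaml` might contain, reusing the image name and port from the Docker section (the labels and replica count are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kerdosai
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kerdosai
  template:
    metadata:
      labels:
        app: kerdosai
    spec:
      containers:
        - name: kerdosai
          image: kerdosai:latest
          ports:
            - containerPort: 8000
```

A matching `k8s/service.yaml` would select `app: kerdosai` and expose port 8000 through a LoadBalancer or Ingress.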
## Cloud Deployment

- Use Terraform scripts (if available) or your cloud provider's CLI to provision infrastructure.
- Deploy Docker containers or Kubernetes clusters according to your cloud setup.
- Configure the API Gateway and DNS routing.
- Enable monitoring and logging via Prometheus, Grafana, or cloud-native tools.
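If you provision with Terraform, the shape of such a script is roughly as follows; this is a sketch only, and the region, AMI ID, instance type, and resource names are all placeholders, not values from the Kerdos project:

```hcl
provider "aws" {
  region = "us-east-1" # placeholder region
}

resource "aws_instance" "kerdosai" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.medium"             # size to expected load

  tags = {
    Name = "kerdosai-server"
  }
}
```

After `terraform apply`, you would install Docker (or join the instance to a Kubernetes cluster) and deploy as in the sections above.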
## Configuration

- Set environment variables for API keys, database URLs, and other secrets securely.
- Configure scaling parameters based on expected load.
- Enable HTTPS and follow security best practices (firewalls, IAM roles).
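One way to keep secrets out of the image is to read them from the environment at startup and fail fast when any are missing. A minimal sketch (the variable names are illustrative, not the service's actual configuration keys):

```python
import os

# Illustrative names; substitute the keys your deployment actually uses.
REQUIRED_VARS = ("KERDOS_API_KEY", "DATABASE_URL")


def load_config(env=os.environ):
    """Collect required settings from `env`, raising if any are missing."""
    missing = [name for name in REQUIRED_VARS if not env.get(name)]
    if missing:
        raise RuntimeError(
            f"Missing required environment variables: {', '.join(missing)}"
        )
    return {name: env[name] for name in REQUIRED_VARS}
```

Failing at startup with a clear message is easier to debug than a connection error deep inside a request handler.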
## Monitoring & Maintenance

- Use Prometheus and Grafana dashboards for real-time monitoring.
- Set up alerts for uptime, latency, and error rates.
- Regularly update images and dependencies to pick up security patches.
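An error-rate alert like the one described above might look like this as a Prometheus alerting rule, assuming the service exports standard HTTP request counters (the metric name `http_requests_total` and the 5% threshold are placeholders):

```yaml
groups:
  - name: kerdosai-alerts
    rules:
      - alert: HighErrorRate
        expr: |
          sum(rate(http_requests_total{status=~"5.."}[5m]))
            / sum(rate(http_requests_total[5m])) > 0.05
        for: 10m
        labels:
          severity: critical
        annotations:
          summary: "More than 5% of requests are failing"
```

The `for: 10m` clause keeps brief spikes from paging anyone; tune the window and threshold to your traffic.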
## Troubleshooting

- Check container logs:

  ```bash
  docker logs <container_id>
  ```

- For Kubernetes pods:

  ```bash
  kubectl logs <pod_name>
  ```

- Verify network connectivity and API endpoint accessibility.
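The connectivity check in the last step can be scripted; a minimal sketch that tests whether a host and port accept TCP connections (the host, port, and timeout are placeholders for your deployment):

```python
import socket


def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within `timeout`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If `can_connect` succeeds but HTTP requests still fail, the problem is likely in the application or a reverse proxy rather than the network path.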
## Additional Resources

- [Kerdos API Documentation](API-Reference)
- [GitHub Repository](https://github.com/kerdosdotio/kerdosai)
- Contact support at [[email protected]](mailto:[email protected])
© 2025 Kerdos Infrasoft Private Limited