minikube n8n sanity checks - akenel/ai-base-starter-kit GitHub Wiki
🔢 The key here is to apply the principles of deduction systematically and verify that:
- The Kubernetes cluster is healthy.
- The namespace (`n8n-ns-dev`) is active.
- The pods within that namespace are running without issues.
- The services and ingress resources are properly configured.
- The storage components (PV, PVC) are bound correctly.
- The Traefik ingress controller is routing traffic as expected.
I shall guide you through the verification process step by step, using only kubectl commands and PowerShell (without grep).
```
kubectl cluster-info
kubectl get namespaces
```
- The first command confirms that the cluster is running and accessible.
- The second checks whether the namespace `n8n-ns-dev` exists.
If the namespace is missing, then either the Helm deployment failed, or there was a misconfiguration.
```
kubectl get all -n n8n-ns-dev
```
- This provides an overview of all resources within the namespace (pods, services, deployments, etc.).
- If the expected resources are not listed, the Helm deployment may not have applied correctly.
To check Helm releases:
```
helm list -n n8n-ns-dev
```
- This confirms whether the Helm chart was successfully deployed.
- If the release is missing, check `helm status <release-name>` for issues.
```
kubectl get pods -n n8n-ns-dev
```
- If pods show `Running` or `Completed`, all is well.
- If pods are stuck in `Pending` or `CrashLoopBackOff`, there is a problem.
For more details:
```
kubectl describe pod <pod-name> -n n8n-ns-dev
kubectl logs <pod-name> -n n8n-ns-dev
```
- These commands reveal possible errors in container startup, image pulling, or configuration.
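To surface only the problem pods in one pass, here is a minimal PowerShell sketch (assuming the `n8n-ns-dev` namespace used throughout):

```
kubectl get pods -n n8n-ns-dev --no-headers |
  Where-Object { $_ -notmatch "Running|Completed" } |
  ForEach-Object { Write-Host "Needs attention: $_" }
```

Any line this prints is a candidate for the `kubectl describe pod` and `kubectl logs` commands above.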
```
kubectl get svc -n n8n-ns-dev
```
- Ensure services are correctly exposed.
Check whether services are forwarding traffic:
```
kubectl get endpoints -n n8n-ns-dev
```
- If a service has no endpoints, it is not connecting to any pods (usually a label-selector mismatch between the service and its pods).
```
kubectl get pv
kubectl get pvc -n n8n-ns-dev
```
- `STATUS` should be `Bound` for PVCs; otherwise, there is a storage misconfiguration.
To inspect details:
```
kubectl describe pvc <pvc-name> -n n8n-ns-dev
```
Next, check the ingress:
```
kubectl get ingress -n n8n-ns-dev
```
- Ensure the ingress rules are properly set.
Check if Traefik is running:
```
kubectl get pods -n kube-system | Where-Object { $_ -match "traefik" }
```
- If the Traefik pod is missing or failing, ingress will not work.
Check its logs:
```
kubectl logs <traefik-pod-name> -n kube-system
```
- This will show whether Traefik is encountering routing errors.
To see if the ingress controller is correctly assigned:
```
kubectl describe ingress <ingress-name> -n n8n-ns-dev
```
- Look for annotations related to Traefik.
If everything seems fine internally, test whether the service is accessible externally.
```
kubectl get ingress -n n8n-ns-dev
```
- Take the host and attempt a request from inside a pod:
```
kubectl run -i --tty busybox --image=busybox --restart=Never -- sh
# then, inside the busybox shell:
wget -O- http://your-ingress-host
```
- If this fails, the problem lies in ingress routing.
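You can also test from the host machine. A sketch, assuming Traefik is exposed on the minikube node IP and reusing the placeholder hostname from above:

```
# Resolve the minikube node IP, then send a request with the
# ingress hostname in the Host header (curl.exe avoids the
# PowerShell Invoke-WebRequest alias)
$ip = minikube ip
curl.exe -H "Host: your-ingress-host" "http://$ip"
```

If this works from the host but the in-pod `wget` fails (or vice versa), that narrows the fault to cluster-internal DNS/routing versus external exposure.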
- Basic health check: 1-2 minutes.
- Deep debugging: 5-10 minutes (depending on issues).
If everything is running fine, I would expect a full check to take no longer than 5 minutes 👍 (purely hypothetically, of course 🔎).
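For reference, the 1-2 minute basic health check can be pasted as a single block (assuming the `n8n-ns-dev` namespace used throughout):

```
# One-shot read-only sanity sweep; each command maps to a step above
kubectl cluster-info
kubectl get namespaces
helm list -n n8n-ns-dev
kubectl get all -n n8n-ns-dev
kubectl get endpoints -n n8n-ns-dev
kubectl get pv
kubectl get pvc -n n8n-ns-dev
kubectl get ingress -n n8n-ns-dev
```

Anything missing, unbound, or not `Running` points you back to the matching deep-dive step above.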