Troubleshooting and Logs
This page provides instructions on how to troubleshoot common issues encountered during the CAPONE deployment process, along with guidance on accessing relevant logs to diagnose and resolve potential problems.
Viewing Management Cluster Logs
To monitor the deployment and configuration of the different VMs, or to investigate any erroneous behavior, follow these steps to access the logs.
Step 1: Retrieve pod names
Begin by retrieving the list of deployed pods within the management cluster:
$ kubectl get pods -A
NAMESPACE                           NAME                                                             READY   STATUS    RESTARTS   AGE
capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-controller-manager-676545ff7c-mg42k       1/1     Running   0          3h14m
capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-controller-manager-656f587d76-t79dp   1/1     Running   0          3h14m
capi-system                         capi-controller-manager-59c7f9c475-bdndt                         1/1     Running   0          3h14m
capone-system                       capone-controller-manager-6cbc7489bd-72gvl                       1/1     Running   0          3h14m
cert-manager                        cert-manager-74b56b6655-wwr7h                                    1/1     Running   0          3h15m
cert-manager                        cert-manager-cainjector-55d94dc4cc-82x77                         1/1     Running   0          3h15m
cert-manager                        cert-manager-webhook-564f647c66-x24jg                            1/1     Running   0          3h15m
kube-system                         coredns-7c65d6cfc9-djt4g                                         1/1     Running   0          3h15m
kube-system                         coredns-7c65d6cfc9-xpw55                                         1/1     Running   0          3h15m
kube-system                         etcd-kind-control-plane                                          1/1     Running   0          3h15m
kube-system                         kindnet-d44wv                                                    1/1     Running   0          3h15m
kube-system                         kube-apiserver-kind-control-plane                                1/1     Running   0          3h15m
kube-system                         kube-controller-manager-kind-control-plane                       1/1     Running   0          3h15m
kube-system                         kube-proxy-2cd6h                                                 1/1     Running   0          3h15m
kube-system                         kube-scheduler-kind-control-plane                                1/1     Running   0          3h15m
local-path-storage                  local-path-provisioner-57c5987fd4-68p52                          1/1     Running   0          3h15m
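Note that pod names carry a random suffix, so they change whenever a pod is recreated. As a small sketch (runnable offline, using a line copied from the listing above), the pod name can be pulled out of the `kubectl get pods -A` output with awk rather than copied by hand:

```shell
# In the listing above, column 1 is the namespace and column 2 the pod name;
# awk selects the entry for the capone-system namespace.
line='capone-system   capone-controller-manager-6cbc7489bd-72gvl   1/1   Running   0   3h14m'
pod_name=$(echo "$line" | awk '$1 == "capone-system" {print $2}')
echo "$pod_name"   # prints capone-controller-manager-6cbc7489bd-72gvl
```

In practice the same selection can be piped directly from `kubectl get pods -A | awk '$1 == "capone-system" {print $2}'`.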
Step 2: Access pod logs
To view the logs for a specific pod, such as the capone-controller-manager pod (responsible for setting up the initial workload cluster), use the following command:
kubectl logs -n capone-system capone-controller-manager-6cbc7489bd-72gvl
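When the log is long or the controller has restarted, the standard kubectl log flags (these are stock kubectl options, not CAPONE-specific) help narrow things down:

```shell
# Follow the log stream and show only the most recent lines:
kubectl logs -n capone-system capone-controller-manager-6cbc7489bd-72gvl -f --tail=100

# If the controller crashed and restarted, read the previous container's logs:
kubectl logs -n capone-system capone-controller-manager-6cbc7489bd-72gvl --previous
```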
Monitoring Workload Cluster Resources
To verify the status of workload cluster components and diagnose potential issues, execute the following commands within the management cluster:
Checking KubeadmControlPlane
To inspect the control plane of the workload cluster (kcp is the kubectl short name for KubeadmControlPlane), run:
$ kubectl get kcp
NAME   CLUSTER   INITIALIZED   API SERVER AVAILABLE   REPLICAS   READY   UPDATED   UNAVAILABLE   AGE    VERSION
one    one       true          true                   1          1       1         0             119m   v1.31.4
Inspecting Machines
Retrieve a list of machines within the workload cluster (ma is the short name for Machine):
$ kubectl get ma
NAME                   CLUSTER   NODENAME               PROVIDERID   PHASE     AGE    VERSION
one-bmfkk              one       one-bmfkk              one://1100   Running   119m   v1.31.4
one-md-0-dh2ls-dfxqb   one       one-md-0-dh2ls-dfxqb   one://1102   Running   118m   v1.31.4
one-md-0-dh2ls-prd6k   one       one-md-0-dh2ls-prd6k   one://1101   Running   118m   v1.31.4
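The PROVIDERID column ties each Machine to its OpenNebula VM. A minimal sketch (runnable without a cluster) of recovering the VM ID from a provider ID of the form shown above:

```shell
# Provider IDs use the form one://<vm-id>, as in the listing above; stripping
# the scheme prefix yields the OpenNebula VM ID, which can then be inspected
# with OpenNebula tooling (e.g. onevm show <vm-id>).
provider_id="one://1100"
vm_id="${provider_id#one://}"
echo "$vm_id"   # prints 1100
```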
Verifying MachineSet Replicas
Additionally, to verify that the machine set is correctly managing the expected number of replicas, use (ms is the short name for MachineSet):
$ kubectl get ms
NAME             CLUSTER   REPLICAS   READY   AVAILABLE   AGE    VERSION
one-md-0-dh2ls   one       2          2       2           118m   v1.31.4
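A healthy MachineSet reports READY equal to REPLICAS. A sketch (runnable offline, on the output row shown above) of checking that with awk; in the `kubectl get ms` output, REPLICAS is column 3 and READY is column 4:

```shell
# Compare the desired replica count against the ready count for one row.
line='one-md-0-dh2ls   one   2   2   2   118m   v1.31.4'
replicas=$(echo "$line" | awk '{print $3}')
ready=$(echo "$line" | awk '{print $4}')
if [ "$replicas" = "$ready" ]; then
  echo "machineset healthy"          # this branch fires for the row above
else
  echo "replicas not yet ready: $ready/$replicas"
fi
```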
For more detailed information about workload cluster resources, you can use the following commands:
kubectl describe kcp
kubectl describe ms
kubectl describe ma
These commands provide extended details on the control plane, machine sets, and machines, which can be helpful in troubleshooting complex issues.
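If clusterctl is available on the machine driving the management cluster, it can also render the whole hierarchy in a single view (a standard clusterctl command; the cluster name one matches the examples above):

```shell
# Print the cluster, control plane, machine deployments and machines as a
# tree, with per-resource status, for a quick end-to-end health check.
clusterctl describe cluster one
```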