Check the Kubernetes cluster

master node

Log in to the master node.

vagrant ssh k8smaster

Check the nodes registered in this cluster and confirm that all of them are Ready.

$ kubectl get nodes
NAME              STATUS   ROLES    AGE     VERSION
k8smaster.local   Ready    master   21m     v1.17.4
k8snode1.local    Ready    <none>   8m54s   v1.17.4
k8snode2.local    Ready    <none>   4m35s   v1.17.4
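
All three nodes should show STATUS Ready. As an optional extra check (not part of the original procedure), you can display more node details and confirm that the system pods are running; these are standard kubectl commands:

$ kubectl get nodes -o wide
$ kubectl get pods -n kube-system

The first command additionally shows each node's internal IP, OS image, and container runtime; the second lists the system pods (CoreDNS, kube-proxy and, depending on how flannel was installed, the flannel DaemonSet pods), all of which should be Running.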

Check the routing table.

$ ip route
default via 10.0.2.2 dev eth0 proto dhcp metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1 
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink 
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
192.168.33.0/24 dev eth1 proto kernel scope link src 192.168.33.101 metric 101
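
The 10.244.0.0/24 route on cni0 is the pod subnet assigned to the master node, while the 10.244.1.0/24 and 10.244.2.0/24 routes via flannel.1 are the pod subnets of the two worker nodes, reached over flannel's VXLAN overlay. As an optional check (not part of the original procedure), you can inspect the VXLAN interface and the flannel pods; on newer flannel releases the pods may live in a kube-flannel namespace instead of kube-system:

$ ip -d link show flannel.1
$ kubectl get pods -n kube-system -o wide | grep flannel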

Building the Kubernetes cluster is now complete.
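
As an optional smoke test (not part of the original procedure), you can deploy a throwaway workload and confirm that its pods are scheduled onto the worker nodes; the deployment name nginx and the image used here are only examples:

$ kubectl create deployment nginx --image=nginx
$ kubectl scale deployment nginx --replicas=2
$ kubectl get pods -o wide
$ kubectl delete deployment nginx

The NODE column of kubectl get pods -o wide should normally show the pods on k8snode1.local and k8snode2.local, since kubeadm taints the master node against regular workloads.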

make a snapshot

If needed, take a snapshot so you can roll back to this state later.

% vagrant snapshot save init
==> k8smaster: Snapshotting the machine as 'init'...
==> k8smaster: Snapshot saved! You can restore the snapshot at any time by
==> k8smaster: using `vagrant snapshot restore`. You can delete it using
==> k8smaster: `vagrant snapshot delete`.
==> k8snode1: Snapshotting the machine as 'init'...
==> k8snode1: Snapshot saved! You can restore the snapshot at any time by
==> k8snode1: using `vagrant snapshot restore`. You can delete it using
==> k8snode1: `vagrant snapshot delete`.
==> k8snode2: Snapshotting the machine as 'init'...
==> k8snode2: Snapshot saved! You can restore the snapshot at any time by
==> k8snode2: using `vagrant snapshot restore`. You can delete it using
==> k8snode2: `vagrant snapshot delete`.
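
The saved snapshot can be listed and restored later with the standard Vagrant snapshot subcommands; restoring reverts all three virtual machines to the state saved as init:

% vagrant snapshot list
% vagrant snapshot restore init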