Install Output
References: Install Notes
12/08/2021 11:56:42 root@csap-dev04.lab.sensus.net Running: install.sh
*
**
***
****
***** command: 'installer/install.sh'
****
***
**
*
'-noPrompt -runCleanUp -deleteContainers -installDisk default -installCsap default -skipOs -ignorePreflight -csapDefinition /root/application.zip'
csapUser: csap
installHome: /opt
CSAP_FOLDER: /opt/csap/csap-platform
csapDefinition: /root/application.zip
isDocker: false
isRunUninstall: false
isDeleteContainers: true
isRunCleaner: true
isHardUmounts: false
isSkipAutoStart: false
isCsapAutoPlay: false
csapFs: default
extraDisk: '' '0'
targetFs: 0
fsType: ext4
Prompts: false
WARNING: -pass was NOT specified - using default
reloading libs: Note: this provides shared definitions
12/08/2021 11:56:42 root@csap-dev04.lab.sensus.net Running: install.sh
csapProcessingFolder: /opt/csap
processesThatMightBeRunning: /opt/csap/java docker containerd /usr/bin/conmon podman kubelet httpd mpstat /opt/csap
*
**
***
****
***** clean_up_previous_installs()
****
***
**
*
clean_up_process_count:
Process Pattern: count
/opt/csap/java: 2
docker: 2
containerd: 9
/usr/bin/conmon: 0
podman: 0
kubelet: 1
httpd: 0
mpstat: 3
/opt/csap: 2
Total: 19
--------------------------------------------- Shutting down csap-agent ---------------------------------------------
Stopping csap: Removed symlink /etc/systemd/system/multi-user.target.wants/csap.service.
------------------------------------------------ Kubernetes Cleaner ------------------------------------------------
------------------------------ perform_kubeadm_reset() cleaning up previous installs ------------------------------
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1208 11:56:45.672306 24971 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get "https://10.22.14.244:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": dial tcp 10.22.14.244:6443: connect: connection refused
[preflight] Running pre-flight checks
W1208 11:56:45.672649 24971 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W1208 11:56:46.609836 24971 cleanupnode.go:81] [reset] Failed to remove containers: [failed to remove running container 822325ca49c7: output: Error response from daemon: removal of container 822325ca49c7 is already in progress
, error: exit status 1, failed to remove running container 9a4358812741: output: Error response from daemon: removal of container 9a4358812741 is already in progress
, error: exit status 1]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
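As the kubeadm output notes, `kubeadm reset` leaves CNI configuration, iptables/IPVS rules, and kubeconfig files behind. A minimal sketch of the manual follow-up it asks for, using the paths named in the messages (run as root; `ipvsadm` only applies if the cluster used IPVS):

```bash
# Remove CNI configuration left behind by kubeadm reset
rm -rf /etc/cni/net.d

# Flush iptables rules and reset IPVS tables (if IPVS was in use)
iptables --flush
iptables --table nat --flush
iptables --table mangle --flush
ipvsadm --clear 2>/dev/null || true

# Remove stale kubeconfig files
rm -f "$HOME/.kube/config"
```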
----------------------------------------- kubernetes systemd cleanup start -----------------------------------------
systemctl listing filtered by pod: pre cleanup:
________________________________________________________________________________________________________________________
kubepods-besteffort-pod69d73921_8c0c_438c_a539_dbe34dbebb85.slice loaded active active libcontainer container kubepods-besteffort-pod69d73921_8c0c_438c_a539_dbe34dbebb85.slice
kubepods-besteffort-pod6cbecc17_cde2_4290_8e93_51bcecea329f.slice loaded active active libcontainer container kubepods-besteffort-pod6cbecc17_cde2_4290_8e93_51bcecea329f.slice
kubepods-besteffort-poddb7d1fc7_53d9_4dbe_9a08_649220e2a2bb.slice loaded active active libcontainer container kubepods-besteffort-poddb7d1fc7_53d9_4dbe_9a08_649220e2a2bb.slice
kubepods-besteffort.slice loaded active active libcontainer container kubepods-besteffort.slice
kubepods-burstable-pod1b392fc509b7a77d756a0ff7db4c1674.slice loaded active active libcontainer container kubepods-burstable-pod1b392fc509b7a77d756a0ff7db4c1674.slice
kubepods-burstable-pod1e787414_0a51_47e0_8abd_e36f1f65d2cf.slice loaded active active libcontainer container kubepods-burstable-pod1e787414_0a51_47e0_8abd_e36f1f65d2cf.slice
kubepods-burstable-pod280667ad0d7e50850368bbc0f843c6a5.slice loaded active active libcontainer container kubepods-burstable-pod280667ad0d7e50850368bbc0f843c6a5.slice
kubepods-burstable-pod64c722c7_d4be_4bb9_b012_c75de5e691ec.slice loaded active active libcontainer container kubepods-burstable-pod64c722c7_d4be_4bb9_b012_c75de5e691ec.slice
kubepods-burstable-pod70c56e1f_bdfc_43b5_80bc_3df41ec3f867.slice loaded active active libcontainer container kubepods-burstable-pod70c56e1f_bdfc_43b5_80bc_3df41ec3f867.slice
kubepods-burstable-pod813b725fc7b5f5c75c918468ed168956.slice loaded active active libcontainer container kubepods-burstable-pod813b725fc7b5f5c75c918468ed168956.slice
kubepods-burstable-pod976a77011933b53459b13d4a85fc75b8.slice loaded active active libcontainer container kubepods-burstable-pod976a77011933b53459b13d4a85fc75b8.slice
kubepods-burstable-pode7bf016b_a628_42cd_be4d_50a4f22ff1fa.slice loaded active active libcontainer container kubepods-burstable-pode7bf016b_a628_42cd_be4d_50a4f22ff1fa.slice
kubepods-burstable-podf497fcfa_a891_4076_add5_56c977a59245.slice loaded active active libcontainer container kubepods-burstable-podf497fcfa_a891_4076_add5_56c977a59245.slice
kubepods-burstable-podfab40516_24e2_43bb_baa6_fde815f7ffbb.slice loaded active active libcontainer container kubepods-burstable-podfab40516_24e2_43bb_baa6_fde815f7ffbb.slice
kubepods-burstable-podfbec7684_9328_4130_baca_1c640a6dcf3e.slice loaded active active libcontainer container kubepods-burstable-podfbec7684_9328_4130_baca_1c640a6dcf3e.slice
kubepods-burstable.slice loaded active active libcontainer container kubepods-burstable.slice
kubepods.slice loaded active active libcontainer container kubepods.slice
________________________________________________________________________________________________________________________
systemctl list-units filtered by pod: post cleanup:
________________________________________________________________________________________________________________________
________________________________________________________________________________________________________________________
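The systemd cleanup stops the leftover `kubepods*` slices so the post-cleanup listing is empty. A sketch of the approach, assuming output like the pre-cleanup listing above:

```bash
# Stop every leftover libcontainer pod slice reported by systemctl
systemctl list-units --no-legend --plain 'kubepods*' \
    | awk '{print $1}' \
    | xargs --no-run-if-empty systemctl stop
```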
Running mount in case kubeadm umounts kubelet due to bug
mount: /dev/mapper/vg0-var_lib_kubelet is already mounted or /var/lib/kubelet busy
/dev/mapper/vg0-var_lib_kubelet is already mounted on /var/lib/kubelet
Checking test nfs: check if exists: /mnt/nfsshare
no process running: /usr/bin/kubelet
------------------------------ removing packages: kubelet kubeadm kubectl from system ------------------------------
yum version unlock: kubelet kubeadm kubectl:
________________________________________________________________________________________________________________________
Loaded plugins: fastestmirror, remove-with-leaves, versionlock
Deleting versionlock for: 0:kubectl-1.21.5-0.*
Deleting versionlock for: 0:kubeadm-1.21.5-0.*
Deleting versionlock for: 0:kubelet-1.21.5-0.*
versionlock deleted: 3
________________________________________________________________________________________________________________________
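The version locks placed at install time must be released before yum can erase the packages. A sketch of the equivalent commands (versions taken from the lock entries above):

```bash
# Show current locks, release the kubernetes ones, then erase the packages;
# the remove-with-leaves plugin also drops dependencies nothing else needs
yum versionlock list
yum versionlock delete '0:kubelet-1.21.5-0.*' '0:kubeadm-1.21.5-0.*' '0:kubectl-1.21.5-0.*'
yum remove --assumeyes kubelet kubeadm kubectl
```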
Loaded plugins: fastestmirror, remove-with-leaves, versionlock
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.21.5-0 will be erased
---> Package kubectl.x86_64 0:1.21.5-0 will be erased
---> Package kubelet.x86_64 0:1.21.5-0 will be erased
--> Processing Dependency: kubelet for package: kubernetes-cni-0.8.7-0.x86_64
--> Running transaction check
---> Package kubernetes-cni.x86_64 0:0.8.7-0 will be erased
--> Finished Dependency Resolution
removing conntrack-tools-1.4.4-7.el7.x86_64. It is not required by anything else.
removing socat-1.7.3.2-2.el7.x86_64. It is not required by anything else.
removing cri-tools-1.19.0-0.x86_64. It is not required by anything else.
removing libnetfilter_cthelper-1.0.0-11.el7.x86_64. It is not required by anything else.
removing libnetfilter_cttimeout-1.0.0-7.el7.x86_64. It is not required by anything else.
removing libnetfilter_queue-1.0.2-2.el7_2.x86_64. It is not required by anything else.
--> Running transaction check
---> Package conntrack-tools.x86_64 0:1.4.4-7.el7 will be erased
---> Package cri-tools.x86_64 0:1.19.0-0 will be erased
---> Package libnetfilter_cthelper.x86_64 0:1.0.0-11.el7 will be erased
---> Package libnetfilter_cttimeout.x86_64 0:1.0.0-7.el7 will be erased
---> Package libnetfilter_queue.x86_64 0:1.0.2-2.el7_2 will be erased
---> Package socat.x86_64 0:1.7.3.2-2.el7 will be erased
--> Finished Dependency Resolution
Dependencies Resolved
================================================================================
Package Arch Version Repository Size
================================================================================
Removing:
conntrack-tools x86_64 1.4.4-7.el7 @base 550 k
cri-tools x86_64 1.19.0-0 @kubernetes-community 23 M
kubeadm x86_64 1.21.5-0 @kubernetes-community 43 M
kubectl x86_64 1.21.5-0 @kubernetes-community 44 M
kubelet x86_64 1.21.5-0 @kubernetes-community 113 M
libnetfilter_cthelper x86_64 1.0.0-11.el7 @base 35 k
libnetfilter_cttimeout x86_64 1.0.0-7.el7 @base 39 k
libnetfilter_queue x86_64 1.0.2-2.el7_2 @base 45 k
socat x86_64 1.7.3.2-2.el7 @base 1.1 M
Removing for dependencies:
kubernetes-cni x86_64 0.8.7-0 @kubernetes-community 55 M
Transaction Summary
================================================================================
Remove 9 Packages (+1 Dependent package)
Installed size: 280 M
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Erasing : kubeadm-1.21.5-0.x86_64 1/10
Erasing : kubelet-1.21.5-0.x86_64 2/10
Erasing : kubernetes-cni-0.8.7-0.x86_64 3/10
Erasing : cri-tools-1.19.0-0.x86_64 4/10
Erasing : kubectl-1.21.5-0.x86_64 5/10
Erasing : conntrack-tools-1.4.4-7.el7.x86_64 6/10
Erasing : libnetfilter_cthelper-1.0.0-11.el7.x86_64 7/10
Erasing : libnetfilter_cttimeout-1.0.0-7.el7.x86_64 8/10
Erasing : libnetfilter_queue-1.0.2-2.el7_2.x86_64 9/10
Erasing : socat-1.7.3.2-2.el7.x86_64 10/10
Verifying : cri-tools-1.19.0-0.x86_64 1/10
Verifying : kubectl-1.21.5-0.x86_64 2/10
Verifying : conntrack-tools-1.4.4-7.el7.x86_64 3/10
Verifying : kubernetes-cni-0.8.7-0.x86_64 4/10
Verifying : libnetfilter_queue-1.0.2-2.el7_2.x86_64 5/10
Verifying : libnetfilter_cttimeout-1.0.0-7.el7.x86_64 6/10
Verifying : kubelet-1.21.5-0.x86_64 7/10
Verifying : libnetfilter_cthelper-1.0.0-11.el7.x86_64 8/10
Verifying : socat-1.7.3.2-2.el7.x86_64 9/10
Verifying : kubeadm-1.21.5-0.x86_64 10/10
Removed:
conntrack-tools.x86_64 0:1.4.4-7.el7
cri-tools.x86_64 0:1.19.0-0
kubeadm.x86_64 0:1.21.5-0
kubectl.x86_64 0:1.21.5-0
kubelet.x86_64 0:1.21.5-0
libnetfilter_cthelper.x86_64 0:1.0.0-11.el7
libnetfilter_cttimeout.x86_64 0:1.0.0-7.el7
libnetfilter_queue.x86_64 0:1.0.2-2.el7_2
socat.x86_64 0:1.7.3.2-2.el7
Dependency Removed:
kubernetes-cni.x86_64 0:0.8.7-0
Complete!
yum: removing /etc/yum.repos.d/kubernetes*
removed ‘/etc/yum.repos.d/kubernetes-el7.repo’
----------------------------------------------- Cleaning file system -----------------------------------------------
Removing '/root/.kube' ...
Removing '/etc/kubernetes' ...
Removing '/var/lib/etcd' ...
Removing '/etc/cni' ...
Removing '/var/run/calico' ...
Removing '/var/lib/calico' ...
Waiting for api server port to be free
________________________________________________________________________________________________________________________
wait_for_port_free kubernetes api server - waiting for filter: '10250' to not be found in 'ss --numeric --processes' (--listen)
________________________________________________________________________________________________________________________
attempt 1 of 10: found: '0'
No remaining instances of '10250'
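`wait_for_port_free` polls `ss` until the filter no longer matches a listening socket. A minimal sketch of that pattern (the function body and sleep interval are assumptions inferred from the output above):

```bash
# Poll until no listening socket matches the given port filter
wait_for_port_free() {
    local filter="$1" max_attempts="${2:-10}" attempt found
    for attempt in $(seq 1 "$max_attempts"); do
        found=$(ss --numeric --processes --listen | grep --count "$filter" || true)
        echo "attempt $attempt of $max_attempts: found: '$found'"
        (( found == 0 )) && { echo "No remaining instances of '$filter'"; return 0; }
        sleep 2
    done
    return 1
}

wait_for_port_free 10250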
-------------------------------------------------- Docker cleaner --------------------------------------------------
containers: 0
imageCount: 15
docker: removing all images
Untagged: csapplatform/test-app:latest
Untagged: csapplatform/test-app@sha256:ab53a8f6481faa6141db4fe3c8c3bd174972d8bc69a1b62b49afd122d613cae6
Deleted: sha256:0759d461332a073b33bafd3c55d7503389e73b958f05bca16471cf4eef4cdadd
Deleted: sha256:5edf9800abfe3302314cb99f935cbc1ee50c15c4dbb55dfe12da3d2d2acf5a1a
Deleted: sha256:893e85aa66405834781ef6554ebbf969e25039deb82660141e4a09563b93b933
Deleted: sha256:a84e91779ccaa58c441ce0ff957a475b5d322cddb80afa0e15b6da0b30d61027
Deleted: sha256:a5e47d77c4562888bdc65680227d2a4745f7c302f9ae86b1a7c8fc8fbf3fe739
Deleted: sha256:b6873c0f832ce9f7ba2f0daa7b1f74e6470ff0fd10837d0694299bf09df76d12
Deleted: sha256:8782df55e9554286c6a25abe7b94e220704cbd6fa4378a75e67bdfd33d8db92a
Deleted: sha256:559a43cff01fc65822d7418b75f2b681399aaf0b042a09a035c48813390b568d
Deleted: sha256:1855256707116c0c229fec2d3a60bce4a11fdfc8b0bffa9663c84e69ec326160
Untagged: k8s.gcr.io/kube-apiserver:v1.21.5
Untagged: k8s.gcr.io/kube-apiserver@sha256:2a88e173873a9870bea873f3907557aefee3fe6887859b648cea06a38fa08227
Deleted: sha256:7b2ac941d4c3053c5d5b9ccaf9de1eb50c2f086b7f0461ff37d22d26c7ab14e4
Deleted: sha256:4ea64bf33ab00286ac080aecacfc9aab9b371d4193a20f4b7ac5d3baa6323ded
Untagged: k8s.gcr.io/kube-scheduler:v1.21.5
Untagged: k8s.gcr.io/kube-scheduler@sha256:af426d1982a88ed5a4facd0cac3d670fee77575ed9d7cf5e98dc7b653f8f8ceb
Deleted: sha256:8e60ea3644d6dd6c1ed564f36ab9ef7bf19bfde909a5dcc4d974ac21ea1b3b3e
Deleted: sha256:245562cf2b3b90d6a183a3066c35125a1fd1ef0e07189f9496ce4fbab54bfa40
Untagged: k8s.gcr.io/kube-controller-manager:v1.21.5
Untagged: k8s.gcr.io/kube-controller-manager@sha256:1c3ae19287a122c4d58b6b7ffb030ea8962edfae798d9986445f85cdc1de1844
Deleted: sha256:184ef4d127b40e20fe2fee7bc542de289e083a62e929d07346fc6a643c583659
Deleted: sha256:6655718c417c86ff7b65b3d2b8ab6fd48b43a01fb7b58ef90b3b8a301623e99b
Deleted: sha256:a806f5b7bc3f9778a94397b1a32c3f383ab83f9f3a95df96ced77d76f7f6622a
Deleted: sha256:16679402dc206c982b5552ab8de7d898547100e5468be29d4f67d393c0eadfdb
Untagged: k8s.gcr.io/kube-proxy:v1.21.5
Untagged: k8s.gcr.io/kube-proxy@sha256:b9f3f8764f5b74085edbed44202aa83dab3555638ea9de182b9f79b6728bee5b
Deleted: sha256:e08abd2be7302b7de4cc9d06bc6045be005c870f93065814da1d761980f7218d
Deleted: sha256:705f37a2bdd125ae29c295812f47a6c11ee29e98e3e0a492db6bc0284fe4494f
Deleted: sha256:48b90c7688a2c85d7081a437ecba5cb706fbaa98b09def0b206dbbe39e3af558
Untagged: fluent/fluentd-kubernetes-daemonset:v1.14.0-debian-elasticsearch7-1.1
Untagged: fluent/fluentd-kubernetes-daemonset@sha256:a4a8c55a4b0e5d9300ff6099a6cdc9cdbd6a96954acc6087a04d4e64601a03c0
Deleted: sha256:af2b8b697e28f011089a8a7b4f2cfd34939bfc0af188605bd962f025298891cf
Deleted: sha256:18417ffaf4768a0a37cd47e48ab3bd4ffae2287211227bb0062e8b47db97415d
Deleted: sha256:56da23d32984db6f93758f9a4875b5be45c5a60a770493ea192e7311377a1219
Deleted: sha256:935f152cd572142347fcb3254d12394a0b6d9ac966a9c3793d6113d7a134e6b3
Deleted: sha256:95397f956738159b2d8b58a346d3b6f999ec91980313fe4f15567c7a5c84be00
Deleted: sha256:8b20c67d5e30b8971b539c3095530f608091c84b988622cf3e5f7a0a466f56a3
Deleted: sha256:8a60773074f542f5e8c598c601c2d75b1c53a19e9ad1ce1ab146129df129f043
Deleted: sha256:c325f4ea27e9d751a2354f6cb37c3fc7cf65f8b0128011d45d89d9030780952f
Deleted: sha256:93f7c1895f27bade892642e8980b77b0a424f3d333177f63ce89097bcce3337d
Deleted: sha256:b71b82935e6d22c327f6749a7925518d2aad4abcd8ec0317970a4afed9522f85
Deleted: sha256:b8442bad30dc8d00e7074cc2281c79a1fa1387c43257ec20825e83bf30d1ae1a
Deleted: sha256:2c51875d82756e7ce5813d1b7e6a99ccbfcfa7e89ed3ec68157edf50cc97c315
Deleted: sha256:70a778ed941ab8c9bf5540b76922d5669e5d27b72f190c0fc5136e686dd6df89
Deleted: sha256:612c965a757426e6982c668d9e428c6c740824e10a2c236c9216c1a7aa4fcf3d
Deleted: sha256:b479586363729db3d5b6d1db5275eccdab56d0110402abdafaa3c908ff853edd
Deleted: sha256:b20b8ea0c465075a5c5d059456a03a4f83b27d625df860f6ba1ae00c7a2fc13e
Deleted: sha256:54d397397aefcdcf0e1464a0678f536aecc991860cf29f99c84eb42d25076c2a
Deleted: sha256:c843a7c114c4969ecf16fc03009deb71e4cdd6f66e500aedf143264ba6576cdf
Deleted: sha256:191655971f23cba6fa0c4024cde85aa4be92f4fb53cc3655ca91c5adb586c191
Deleted: sha256:d30075c08dde5bc51c00b64bc9dcead47b10f6bd46d9c033ebb0b59418783b42
Deleted: sha256:80b925afc53e9c904c0e351bd13b1bc7e7823ece9c67e1d3ef169f409971e072
Deleted: sha256:f68ef921efae588b3dd5cc466a1ca9c94c24785f1fa9420bea15ecc2dedbe781
Untagged: quay.io/brancz/kube-rbac-proxy:v0.11.0
Untagged: quay.io/brancz/kube-rbac-proxy@sha256:b62289c3f3f883ee76dd4e8879042dd19abff743340e451cb59f9654fc472e4f
Deleted: sha256:29589495df8d994d4b8b7a59053569a43fb35c001fdf151984b225cbf4718d32
Deleted: sha256:445fe7592d9ea57290f3d5efc90f8b392147bb51b1ae8326e58def101c72aa88
Deleted: sha256:07363fa8421000ad294c2881d17b0535aabdd17ced2a874eb354a9d8514d3d59
Untagged: quay.io/prometheus/node-exporter:v1.2.2
Untagged: quay.io/prometheus/node-exporter@sha256:a990408ed288669bbad5b5b374fe1584e54825cde4a911c1a3d6301a907a030c
Deleted: sha256:0fafea14985942e880dd5b7df98f97f51a2ac25a2eb901a78e53e8b21cfb21c2
Deleted: sha256:6ed7f0a2d8f2ffad4ea1bccbde5d0bcd8da19ebade2ad497815aa8e25efa5200
Deleted: sha256:c412f31db56f989d5d559f6311cf9e2197872390bd13b893353fc1c1a8679984
Deleted: sha256:36b45d63da70799f32c6e82de9be2e75167099b010def8771cf847c0e4c9c975
Untagged: k8s.gcr.io/ingress-nginx/controller@sha256:a1e4efc107be0bb78f32eaec37bef17d7a0c81bec8066cdf2572508d21351d0b
Deleted: sha256:bf621a764db545fb64e7681c42e44ef33ad58165696359fbd26edc9a7e74a0c5
Deleted: sha256:2a0fdcd4b965f5341fb4b9ef072bc4f9abfefe4c18bd2eeccf7618adfdf97e94
Deleted: sha256:c119c7395046c7109463a45e3ca3537c673b9b502f05d366abf0c8cb43868562
Deleted: sha256:fc8d65aebe0c956524ba98efd96a98828d0c8b87c5dcdd82f4002b9000045674
Deleted: sha256:5515b2175cc0c1c13b70381337939a83119e9f8319e5b5c47add4fb094b2c626
Deleted: sha256:a45418a0d21cdab33563db6486406063b4a37ead99a0ee50ed12204824672f2c
Deleted: sha256:be1a72861b36532a5b25cea219ce948ed8287724b9a80be9cb07f24ee57ee7d2
Deleted: sha256:45d6568c35497e878c6df73fdc37a1c4cd1e41d95726ef2b91fe93bd1d710c7f
Deleted: sha256:7e30dc9b3d0348ccff50d2b9274c0de2349400329c83a4e3f875ec24d89901f3
Deleted: sha256:50b3239cbc9bf52910305743acffe556c01e5ca26b4d45230ebb61af03624ce1
Deleted: sha256:51fd880a8c3f74d651e043f0886b4be4a9eea40b93f7883371e759cc85d5b570
Deleted: sha256:4caad879642304dd6f69aa3047ceb965a516e525a54fff4c108c6099eed4f198
Deleted: sha256:e165fc745f6031851c6de042a2d5162ad21d5f71ffa31fd2215b672535205f3b
Deleted: sha256:145fbfd6eeb86c53b56011192a19a876c0c379b79020ed80f85fe491b73a3bf7
Deleted: sha256:b2d5eeeaba3a22b9b8aa97261957974a6bd65274ebd43e1d81d0a7b8b752b116
Untagged: calico/node:v3.18.0
Untagged: calico/node@sha256:e207db848adb688ae3edac439c650b0ec44c0675efc3b735721f7541e0320100
Deleted: sha256:5a7c4970fbc29f122f97f65426add223d4da1253b63b6a02ab98ae0d27f4b3f5
Deleted: sha256:c19b81a150f09b94fb04aea7ebb777c97fe6adff9668a3aa0ee4c6d9e17e1acb
Deleted: sha256:e41d494538a1468ce2562ef44023af623c99bfc78a14b678dcb246c8ce33edbd
Untagged: calico/kube-controllers:v3.18.0
Untagged: calico/kube-controllers@sha256:12515fb6e2f35a2ce86133cb2f17fa8039381c9ac031c2b300af398f8589651f
Deleted: sha256:9a154323fbf74983ed091e9ab196ed16d720aab79d1dd88d3cd9b7c43721dc38
Deleted: sha256:e32bc5b3cb790f9a24ac7d86dfb59d272bca2c963a968d1ac076cd93c8a755a8
Deleted: sha256:5b36ab4cc6d089d23c7d0c66da84103c6927838a7502c65514df206a0ab6b202
Deleted: sha256:aafb089d72d0d255d0f6686be3bbe10212b5e44bdc78dc72278009b9887ee946
Deleted: sha256:df192152c1d4140e5d5f968c2e81c75798333df472368bc4c528cb9c9b4ae596
Untagged: k8s.gcr.io/pause:3.4.1
Untagged: k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810
Deleted: sha256:0f8457a4c2ecaceac160805013dc3c61c63a1ff3dee74a473a36249a748e0253
Deleted: sha256:915e8870f7d1d4024c7dcbd1b5d50fe34c526a1545d738a988817ae793d68503
Untagged: k8s.gcr.io/coredns/coredns:v1.8.0
Untagged: k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e
Deleted: sha256:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899
Deleted: sha256:ca6a7c5e057ea6ce85926f47f1b5d76eaf48ab262cf56d1e021d07e8d190950f
Deleted: sha256:225df95e717ceb672de0e45aa49f352eace21512240205972aca0fccc9612722
Untagged: k8s.gcr.io/etcd:3.4.13-0
Untagged: k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2
Deleted: sha256:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934
Deleted: sha256:f3cecccfe2bea1cbd18db5eae847c3a9c8253663bf30a41288f541dc1470b41e
Deleted: sha256:a03fdfe95db0baad20d1170a5fd176873e8adc7fcab86dc638d60bcbac54553b
Deleted: sha256:d2374d0a05c42f2237ce15e9211e0a34590d487b98fd0120034dc86b995b7cce
Deleted: sha256:51cd589a7dc43b7d504cb02feb86e92b6033c225c960c42ebd05aca6fbae92c0
Deleted: sha256:d72a74c56330b347f7d18b64d2effd93edd695fde25dc301d52c37efbcf4844e
Warning: Stopping docker.service, but it can still be activated by:
docker.socket
no process running: /usr/bin/dockerd
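The Docker cleaner removes every image before stopping the daemon; a hedged equivalent using the docker CLI (the installer's exact flags are not shown in the log):

```bash
# Delete all containers and images, mirroring 'docker: removing all images'
docker container rm --force $(docker container ls --all --quiet) 2>/dev/null || true
docker image rm --force $(docker image ls --all --quiet) 2>/dev/null || true

# Stop docker; the socket must be stopped too or it re-activates the service
systemctl stop docker.service docker.socket
```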
----------------------------------------- removing docker rpms from system -----------------------------------------
yum version unlock: lvm2* docker-ce docker-ce-cli:
________________________________________________________________________________________________________________________
Loaded plugins: fastestmirror, remove-with-leaves, versionlock
Deleting versionlock for: 3:docker-ce-20.10.6-3.el7.*
versionlock deleted: 1
________________________________________________________________________________________________________________________
Loaded plugins: fastestmirror, remove-with-leaves, versionlock
Resolving Dependencies
--> Running transaction check
---> Package docker-ce.x86_64 3:20.10.6-3.el7 will be erased
--> Processing Dependency: docker-ce for package: docker-ce-rootless-extras-20.10.11-3.el7.x86_64
---> Package docker-ce-cli.x86_64 1:20.10.11-3.el7 will be erased
--> Processing Dependency: docker-ce-cli for package: docker-scan-plugin-0.9.0-3.el7.x86_64
--> Running transaction check
---> Package docker-ce-rootless-extras.x86_64 0:20.10.11-3.el7 will be erased
---> Package docker-scan-plugin.x86_64 0:0.9.0-3.el7 will be erased
--> Finished Dependency Resolution
removing fuse-overlayfs-0.7.2-6.el7_8.x86_64. It is not required by anything else.
removing slirp4netns-0.4.3-4.el7_8.x86_64. It is not required by anything else.
removing containerd.io-1.4.12-3.1.el7.x86_64. It is not required by anything else.
removing fuse3-libs-3.6.1-4.el7.x86_64. It is not required by anything else.
removing 2:container-selinux-2.119.2-1.911c772.el7_8.noarch. It is not required by anything else.
removing policycoreutils-python-2.5-34.el7.x86_64. It is not required by anything else.
removing libcgroup-0.41-21.el7.x86_64. It is not required by anything else.
removing audit-libs-python-2.8.5-4.el7.x86_64. It is not required by anything else.
removing checkpolicy-2.5-8.el7.x86_64. It is not required by anything else.
removing setools-libs-3.3.8-4.el7.x86_64. It is not required by anything else.
removing libsemanage-python-2.5-14.el7.x86_64. It is not required by anything else.
removing python-IPy-0.75-6.el7.noarch. It is not required by anything else.
--> Running transaction check
---> Package audit-libs-python.x86_64 0:2.8.5-4.el7 will be erased
---> Package checkpolicy.x86_64 0:2.5-8.el7 will be erased
---> Package container-selinux.noarch 2:2.119.2-1.911c772.el7_8 will be erased
---> Package containerd.io.x86_64 0:1.4.12-3.1.el7 will be erased
---> Package fuse-overlayfs.x86_64 0:0.7.2-6.el7_8 will be erased
---> Package fuse3-libs.x86_64 0:3.6.1-4.el7 will be erased
---> Package libcgroup.x86_64 0:0.41-21.el7 will be erased
---> Package libsemanage-python.x86_64 0:2.5-14.el7 will be erased
---> Package policycoreutils-python.x86_64 0:2.5-34.el7 will be erased
---> Package python-IPy.noarch 0:0.75-6.el7 will be erased
---> Package setools-libs.x86_64 0:3.3.8-4.el7 will be erased
---> Package slirp4netns.x86_64 0:0.4.3-4.el7_8 will be erased
--> Finished Dependency Resolution
Dependencies Resolved
================================================================================
Package Arch Version Repository Size
================================================================================
Removing:
audit-libs-python x86_64 2.8.5-4.el7 @base 316 k
checkpolicy x86_64 2.5-8.el7 @base 1.2 M
container-selinux noarch 2:2.119.2-1.911c772.el7_8
@extras 41 k
containerd.io x86_64 1.4.12-3.1.el7 @docker-ce-stable-community 108 M
docker-ce x86_64 3:20.10.6-3.el7 @docker-ce-stable-community 115 M
docker-ce-cli x86_64 1:20.10.11-3.el7 @docker-ce-stable-community 139 M
fuse-overlayfs x86_64 0.7.2-6.el7_8 @extras 116 k
fuse3-libs x86_64 3.6.1-4.el7 @extras 270 k
libcgroup x86_64 0.41-21.el7 @base 134 k
libsemanage-python x86_64 2.5-14.el7 @base 441 k
policycoreutils-python
x86_64 2.5-34.el7 @base 1.2 M
python-IPy noarch 0.75-6.el7 @base 119 k
setools-libs x86_64 3.3.8-4.el7 @base 1.8 M
slirp4netns x86_64 0.4.3-4.el7_8 @extras 169 k
Removing for dependencies:
docker-ce-rootless-extras
x86_64 20.10.11-3.el7 @docker-ce-stable-community 20 M
docker-scan-plugin x86_64 0.9.0-3.el7 @docker-ce-stable-community 13 M
Transaction Summary
================================================================================
Remove 14 Packages (+2 Dependent packages)
Installed size: 401 M
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Erasing : docker-ce-rootless-extras-20.10.11-3.el7.x86_64 1/16
Erasing : 3:docker-ce-20.10.6-3.el7.x86_64 2/16
Erasing : containerd.io-1.4.12-3.1.el7.x86_64 3/16
Erasing : 2:container-selinux-2.119.2-1.911c772.el7_8.noarch 4/16
Erasing : policycoreutils-python-2.5-34.el7.x86_64 5/16
Erasing : docker-scan-plugin-0.9.0-3.el7.x86_64 6/16
Erasing : 1:docker-ce-cli-20.10.11-3.el7.x86_64 7/16
Erasing : fuse-overlayfs-0.7.2-6.el7_8.x86_64 8/16
Erasing : python-IPy-0.75-6.el7.noarch 9/16
Erasing : fuse3-libs-3.6.1-4.el7.x86_64 10/16
Erasing : audit-libs-python-2.8.5-4.el7.x86_64 11/16
Erasing : checkpolicy-2.5-8.el7.x86_64 12/16
Erasing : setools-libs-3.3.8-4.el7.x86_64 13/16
Erasing : libcgroup-0.41-21.el7.x86_64 14/16
Erasing : libsemanage-python-2.5-14.el7.x86_64 15/16
Erasing : slirp4netns-0.4.3-4.el7_8.x86_64 16/16
Verifying : 1:docker-ce-cli-20.10.11-3.el7.x86_64 1/16
Verifying : docker-scan-plugin-0.9.0-3.el7.x86_64 2/16
Verifying : fuse3-libs-3.6.1-4.el7.x86_64 3/16
Verifying : fuse-overlayfs-0.7.2-6.el7_8.x86_64 4/16
Verifying : libsemanage-python-2.5-14.el7.x86_64 5/16
Verifying : 3:docker-ce-20.10.6-3.el7.x86_64 6/16
Verifying : slirp4netns-0.4.3-4.el7_8.x86_64 7/16
Verifying : 2:container-selinux-2.119.2-1.911c772.el7_8.noarch 8/16
Verifying : python-IPy-0.75-6.el7.noarch 9/16
Verifying : checkpolicy-2.5-8.el7.x86_64 10/16
Verifying : docker-ce-rootless-extras-20.10.11-3.el7.x86_64 11/16
Verifying : policycoreutils-python-2.5-34.el7.x86_64 12/16
Verifying : audit-libs-python-2.8.5-4.el7.x86_64 13/16
Verifying : setools-libs-3.3.8-4.el7.x86_64 14/16
Verifying : containerd.io-1.4.12-3.1.el7.x86_64 15/16
Verifying : libcgroup-0.41-21.el7.x86_64 16/16
Removed:
audit-libs-python.x86_64 0:2.8.5-4.el7
checkpolicy.x86_64 0:2.5-8.el7
container-selinux.noarch 2:2.119.2-1.911c772.el7_8
containerd.io.x86_64 0:1.4.12-3.1.el7
docker-ce.x86_64 3:20.10.6-3.el7
docker-ce-cli.x86_64 1:20.10.11-3.el7
fuse-overlayfs.x86_64 0:0.7.2-6.el7_8
fuse3-libs.x86_64 0:3.6.1-4.el7
libcgroup.x86_64 0:0.41-21.el7
libsemanage-python.x86_64 0:2.5-14.el7
policycoreutils-python.x86_64 0:2.5-34.el7
python-IPy.noarch 0:0.75-6.el7
setools-libs.x86_64 0:3.3.8-4.el7
slirp4netns.x86_64 0:0.4.3-4.el7_8
Dependency Removed:
docker-ce-rootless-extras.x86_64 0:20.10.11-3.el7
docker-scan-plugin.x86_64 0:0.9.0-3.el7
Complete!
---------------------------------------- removing /etc/yum.repos.d/docker* ----------------------------------------
removed ‘/etc/yum.repos.d/docker-ce.repo’
systemctl: Removing start up configuration in /etc/systemd/system/docker.service.d
removed ‘/etc/systemd/system/docker.service.d/docker.conf’
removed directory: ‘/etc/systemd/system/docker.service.d’
skipping: device-mapper-persistent-data - clean manually if required
------------------------ removing all virtual interfaces: starting with br, cali, and tunl ------------------------
------------------------------------------ Remaining Network Interfaces: ------------------------------------------
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:95:a7:ef brd ff:ff:ff:ff:ff:ff
inet 10.22.14.244/22 brd 10.22.15.255 scope global noprefixroute dynamic ens192
valid_lft 411680sec preferred_lft 411680sec
inet6 fe80::250:56ff:fe95:a7ef/64 scope link noprefixroute
valid_lft forever preferred_lft forever
7: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
inet 192.168.203.64/32 brd 192.168.203.64 scope global tunl0
valid_lft forever preferred_lft forever
5296: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:bf:f4:ca:3a brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
---------------------------------- Removing routes containing either 'bird|cali' ----------------------------------
-------------------------- Remaining routes, to clean up: 'ip route del <line-from-list>' --------------------------
default via 10.22.12.1 dev ens192 proto dhcp metric 100
10.22.12.0/22 dev ens192 proto kernel scope link src 10.22.14.244 metric 100
------------------------------------------------- Purging iptables -------------------------------------------------
----------------------------------------------- cleaning up iptables -----------------------------------------------
table filter: found '273' routes
post flush: '3' remain.
table nat: found '47' routes
post flush: '4' remain.
table mangle: found '21' routes
post flush: '5' remain.
table raw: found '16' routes
post flush: '2' remain.
Note: view details using 'iptables --list-rules'
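The before/after counts come from flushing each table; a sketch, assuming the counts are taken from `iptables --list-rules` (built-in chain policies remain, which is why a few entries survive the flush):

```bash
# Flush each table and report how many rules remain
for table in filter nat mangle raw; do
    before=$(iptables --table "$table" --list-rules | wc --lines)
    iptables --table "$table" --flush
    iptables --table "$table" --delete-chain
    after=$(iptables --table "$table" --list-rules | wc --lines)
    echo "table $table: found '$before' rules, post flush: '$after' remain."
done
```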
----------------------------------------- Removing docker folder contents -----------------------------------------
Unmounting: mounts underneath: '/var/lib/docker'
mounts [Passed] found: 13 filesystems
removing: '/var/lib/docker/*'
cleanLocalItemsOnly(): folder: /var/lib/docker contains: '214' items, and '214' are local filesystem
note: no mount points detected - recursively removing contents of /var/lib/docker
------------------------------------------ Checking for running processes ------------------------------------------
user csap: Running killall and pkill on user
/opt/csap/java: no matches found
docker: no matches found
containerd: no matches found
/usr/bin/conmon: no matches found
podman: no matches found
kubelet: no matches found
httpd: no matches found
mpstat: no matches found
/opt/csap: no matches found
clean_up_process_count:
Process Pattern: count
/opt/csap/java: 0
docker: 0
containerd: 0
/usr/bin/conmon: 0
podman: 0
kubelet: 0
httpd: 0
mpstat: 0
/opt/csap: 0
Total: 0
-------------------------------- Pre clean process matches: '19', post clean: '0' --------------------------------
processes listing:
________________________________________________________________________________________________________________________
UID PID PPID C STIME TTY TIME CMD
root 1 0 1 2020 ? 5-12:22:04 /usr/lib/systemd/systemd --system --deserialize 20
root 348 1 0 May04 ? 00:00:00 /usr/sbin/lvmetad -f
root 703 1 0 2020 ? 00:00:39 /usr/lib/systemd/systemd-udevd
root 935 1 0 2020 ? 01:44:16 /sbin/auditd
dbus 957 1 0 2020 ? 2-18:51:28 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
root 959 1 0 2020 ? 08:20:35 /usr/sbin/NetworkManager --no-daemon
root 960 1 0 2020 ? 01:01:39 /usr/sbin/irqbalance --foreground
polkitd 962 1 0 2020 ? 12:17:31 /usr/lib/polkit-1/polkitd --no-debug
root 965 1 0 2020 ? 1-01:04:35 /usr/lib/systemd/systemd-logind
root 966 1 0 2020 ? 00:00:00 /usr/bin/VGAuthService -s
root 967 1 0 2020 ? 12:34:29 /usr/bin/vmtoolsd
root 977 1 0 2020 ? 00:00:00 /usr/sbin/gssproxy -D
rpc 978 1 0 2020 ? 00:00:59 /sbin/rpcbind -w
root 1292 1 0 2020 ? 00:00:16 /usr/sbin/sshd -D
root 1293 1 0 2020 ? 01:03:07 /usr/bin/python2 -Es /usr/sbin/tuned -l -P
root 1300 1 0 2020 ? 06:53:09 /usr/sbin/rsyslogd -n
rpcuser 1345 1 0 2020 ? 00:00:00 /usr/sbin/rpc.statd
root 1499 1 0 2020 tty1 00:00:00 /sbin/agetty --noclear tty1 linux
root 2351 959 0 2020 ? 00:00:02 /sbin/dhclient -d -q -sf /usr/libexec/nm-dhcp-helper -pf /var/run/dhclient-ens192.pid -lf /var/lib/NetworkManager/dhclient-03da7500-2101-c722-2438-d0d006c28c73-ens192.lease -cf /var/lib/NetworkManager/dhclient-ens192.conf ens192
postfix 2567 2626 0 10:22 ? 00:00:00 pickup -l -t unix -u
root 2626 1 0 2020 ? 00:10:48 /usr/libexec/postfix/master -w
postfix 2629 2626 0 2020 ? 00:02:04 qmgr -l -t unix -u
root 7626 1 0 2020 ? 00:10:29 /usr/sbin/crond -n
root 11316 1 0 Aug12 ? 09:22:30 /usr/lib/systemd/systemd-journald
root 24688 1 0 11:56 ? 00:00:00 /bin/bash installer/install.sh -noPrompt -runCleanUp -deleteContainers -installDisk default -installCsap default -skipOs -ignorePreflight -csapDefinition /root/application.zip
root 25622 703 0 11:56 ? 00:00:00 /usr/lib/systemd/systemd-udevd
root 25623 703 0 11:56 ? 00:00:00 /usr/lib/systemd/systemd-udevd
root 25788 24688 0 11:56 ? 00:00:00 /bin/bash installer/install.sh -noPrompt -runCleanUp -deleteContainers -installDisk default -installCsap default -skipOs -ignorePreflight -csapDefinition /root/application.zip
root 25789 25788 0 11:56 ? 00:00:00 ps -ef
chrony 26104 1 0 Mar04 ? 00:01:57 /usr/sbin/chronyd
________________________________________________________________________________________________________________________
*
**
***
****
***** csap preflight
****
***
**
*
network [Passed] detected ens192
distribution [Passed] discovered CentOS
version [Passed] discovered 7.8.2003
packages [Passed] found 520 packages, maximum is 600
mounts [Passed] found: 13 filesystems
filesystem [Passed] verified 16G /run, size: 16
filesystem [Passed] verified 69G /var/lib/docker, size: 69
filesystem [Passed] verified 30G /var/lib/kubelet, size: 30
filesystem [Passed] verified 25G /opt, size: 25
process [Passed] found 31 processes, maximum is 55
process [Passed] no csap processes found
________________________________________________________________________________________________________________________
OS Installation
________________________________________________________________________________________________________________________
user exists: 'csap', skipping create
12/08/2021 11:57:02 root@csap-dev04.lab.sensus.net Running: install-csap-sudo.sh
--------- /root/installer/install-csap-sudo.sh - csapUser: 'csap', csapBin: '/opt/csap/csap-platform/bin' ---------
removing requiretty to enable webapp maintenance
/etc/sudoers:
________________________________________________________________________________________________________________________
csap ALL=NOPASSWD: /usr/bin/pmap
csap ALL=NOPASSWD: /sbin/service
csap ALL=NOPASSWD: /bin/kill
csap ALL=NOPASSWD: /bin/rm
csap ALL=NOPASSWD: /bin/nice
csap ALL=NOPASSWD: /usr/bin/pkill
csap ALL=NOPASSWD: /opt/csap/csap-platform/bin/csap-renice.sh
csap ALL=NOPASSWD: /opt/csap/csap-platform/bin/csap-run-as-root.sh
csap ALL=NOPASSWD: /opt/csap/csap-platform/bin/csap-deploy-as-root.sh
csap ALL=NOPASSWD: /opt/csap/csap-platform/bin/csap-unzip-as-root.sh
________________________________________________________________________________________________________________________
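The NOPASSWD entries above let the csap user run specific platform scripts as root. A quick way to verify them after the install:

```bash
# Validate sudoers syntax, then list what csap may run without a password
visudo -c
sudo -l -U csap
```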
---------------------------------------------- isSkipOsConfiguration ----------------------------------------------
skipping configuration of kernel, security limits and os_package installation
Verifying required packages are installed: jq tar zip unzip nmap-ncat dos2unix psmisc net-tools wget dos2unix sysstat lsof bind-utils yum-utils yum-plugin-remove-with-leaves yum-plugin-versionlock
installed: jq
installed: tar
installed: zip
installed: unzip
installed: nmap-ncat
installed: dos2unix
installed: psmisc
installed: net-tools
installed: wget
installed: dos2unix
installed: sysstat
installed: lsof
installed: bind-utils
installed: yum-utils
installed: yum-plugin-remove-with-leaves
installed: yum-plugin-versionlock
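The package verification above can be reproduced with rpm; a minimal sketch of the check-and-install loop:

```bash
# Verify each required package is present, installing any that are missing
for pkg in jq tar zip unzip nmap-ncat dos2unix psmisc net-tools wget \
           sysstat lsof bind-utils yum-utils \
           yum-plugin-remove-with-leaves yum-plugin-versionlock; do
    if rpm --query --quiet "$pkg"; then
        echo "installed: $pkg"
    else
        yum install --assumeyes "$pkg"
    fi
done
```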
*
**
***
****
***** Running csap_user_install
****
***
**
*
location: '/opt/csap'
partition setup: skipped - install will occur under /opt/csap
Deleting: /opt/csap
csap: creating /root/installer/simple.bashrc and /root/installer/simple.bash_profile
permissions: Updating '/opt/csap' to 'csap'
certs: no custom certs provided
docker: Adding group and membership to avoid a restart if/when docker is installed
result: groupadd: group 'docker' already exists
Adding user csap to group docker
--------------------------------------- adding permissions for user: 'csap' ---------------------------------------
# file: /root/installer
# owner: root
# group: root
user::rwx
user:csap:r-x
group::rwx
mask::rwx
other::r-x
# file: csap-host-21.11.zip
# owner: root
# group: root
user::rw-
user:csap:r-x
group::r--
mask::r-x
other::r--
chmod 755:
# file: /root/installer
# owner: root
# group: root
user::rwx
user:csap:r-x
group::rwx
mask::rwx
other::r-x
chmod 755:
# file: /root
# owner: root
# group: root
user::rwx
user:csap:r-x
group::r-x
mask::r-x
other::r-x
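The listings above are `getfacl` output; the grants themselves would come from `setfacl`. A sketch of the equivalent commands (the installer's exact invocation is not shown in the log):

```bash
# Give the csap user read/execute on the installer staging area and /root,
# then verify with getfacl (matches the listings above)
setfacl --modify user:csap:r-x /root/installer /root
chmod 755 /root/installer /root
getfacl /root/installer
```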
------------------ switching to user 'csap' to launch: '/root/installer/installer-csap-user.sh' ------------------
12/08/2021 11:57:04 csap@csap-dev04.lab.sensus.net Running: installer-csap-user.sh
*
**
***
****
***** command: '/root/installer/installer-csap-user.sh'
****
***
**
*
'-noPrompt -runCleanUp -deleteContainers -installDisk default -installCsap default -skipOs -ignorePreflight -csapDefinition /root/application.zip'
csapUser: csap
installHome: /opt
CSAP_FOLDER: /opt/csap/csap-platform
csapDefinition: /root/application.zip
isDocker: false
isRunUninstall: false
isDeleteContainers: true
isRunCleaner: true
isHardUmounts: false
isSkipAutoStart: false
isCsapAutoPlay: false
csapFs: default
extraDisk: '' '0'
targetFs: 0
fsType: ext4
Prompts: false
WARNING: -pass was NOT specified - using default
reloading libs: Note: this provides shared definitions
12/08/2021 11:57:04 csap@csap-dev04.lab.sensus.net Running: installer-csap-user.sh
csapProcessingFolder: /opt/csap
processesThatMightBeRunning: /opt/csap/java docker containerd /usr/bin/conmon podman kubelet httpd mpstat /opt/csap
environment: creating csapEnvironmentFile: '/opt/csap/.csapEnvironment'
----------------------------- Running normal install in current directory: '/opt/csap' -----------------------------
Unzipping: /root/installer/../csap-host*.zip to '/opt/csap'
/opt/csap/.bashrc: copied from /opt/csap/csap-platform/bin/admin.bashrc
12/08/2021 11:57:08 csap@csap-dev04.lab.sensus.net Running: installer-csap-user.sh
------------------------------------------------- installing java -------------------------------------------------
creating: '/opt/csap/csap-platform/working/csap-package-java'
mkdir: created directory ‘/opt/csap/csap-platform/working’
mkdir: created directory ‘/opt/csap/csap-platform/working/csap-package-java’
loading: /opt/csap/csap-platform/bin/csap-environment.sh, with messages hidden
extracting: /opt/csap/csap-platform/packages/csap-package-java.zip to /opt/csap/csap-platform/working/csap-package-java
loading: /opt/csap/csap-platform/working/csap-package-java/csap-api.sh
------------------------------------------------ CSAP Java Package ------------------------------------------------
jdkDistribution: 'OpenJDK11U-jdk_x64_linux_11.0.12_7.tar.gz'
version: '11.0.12_7'
invoking: api_service_start, current folder: '/opt/csap/csap-platform/working/csap-package-java'
creating: /opt/csap/csap-platform/working/csap-package-java/version/11.0.12_7
mkdir: created directory ‘/opt/csap/csap-platform/working/csap-package-java/version’
mkdir: created directory ‘/opt/csap/csap-platform/working/csap-package-java/version/11.0.12_7’
source: /opt/csap/csap-platform/working/csap-package-java
javaFolderName: openjdk-11.0.12_7
installPath: /opt/csap/java/openjdk-11.0.12_7
mkdir: created directory ‘/opt/csap/java’
java base: /opt/csap/java
deleting: JAVA11_HOME from /opt/csap/.csapEnvironment
delete_all_in_file: WARNING: no occurrences of 'JAVA11_HOME' in '/opt/csap/.csapEnvironment'
append_file() : line: 'export JAVA11_HOME=/opt/csap/java/openjdk-11.0.12_7' to file: '/opt/csap/.csapEnvironment'
default java: detected because /opt/csap/csap-platform/working/csap-package-java matches csap-package-java
delete_all_in_file: WARNING: no occurrences of 'JAVA_HOME' in '/opt/csap/.csapEnvironment'
append_file() : line: 'export JAVA_HOME=/opt/csap/java/openjdk-11.0.12_7' to file: '/opt/csap/.csapEnvironment'
append_file() : line: 'export PATH=$JAVA_HOME/bin:$PATH' to file: '/opt/csap/.csapEnvironment'
mkdir: created directory ‘temp’
extracting: /opt/csap/csap-platform/packages/csap-package-java.secondary/*jdk*.tar.gz
destination: /opt/csap/java/temp
moving: /opt/csap/java/temp to /opt/csap/java/openjdk-11.0.12_7
permissions: running chmod --recursive 555 /opt/csap/java/openjdk-11.0.12_7
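Once the JDK is unpacked and the environment file is updated, the result can be checked by sourcing the file the installer appended to:

```bash
# Pick up JAVA_HOME/JAVA11_HOME exported by the installer and confirm the JDK
source /opt/csap/.csapEnvironment
echo "JAVA_HOME: $JAVA_HOME"
java -version
```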
----------------------------------- setup_definition(): '/root/application.zip' -----------------------------------
------------------------- Using local definition file '/root/installer/../application.zip' -------------------------
template: replacing with csap-dev04
replace_all_in_file: WARNING: no occurrences of 'csap_def_template_host' in '/opt/csap/csap-platform/definition/csap-dev-project.json'
csap-admin.jar: Copying csap-agent.jar to csap-admin.jar as they use the same binary
‘/opt/csap/csap-platform/packages/csap-agent.jar’ -> ‘/opt/csap/csap-platform/packages/csap-admin.jar’
----------------------------------------- installer-csap-user.sh completed -----------------------------------------
-------------------------------------- Removing permissions for user: 'csap' --------------------------------------
# file: /root
# owner: root
# group: root
user::rwx
group::r-x
mask::r-x
other::r-x
*
**
***
****
***** Configuring /etc/systemd/system/csap.service
****
***
**
*
Created symlink from /etc/systemd/system/multi-user.target.wants/csap.service to /etc/systemd/system/csap.service.
------------------------------------------ systemctl status csap.service ------------------------------------------
● csap.service - Csap Agent Service
Loaded: loaded (/etc/systemd/system/csap.service; enabled; vendor preset: disabled)
Active: active (exited) since Wed 2021-12-08 11:57:13 EST; 16ms ago
Process: 26238 ExecStart=/opt/csap/csap-platform/bin/csap-start.sh -d (code=exited, status=0/SUCCESS)
Main PID: 26238 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/csap.service
└─26448 /opt/csap/java/openjdk-11.0.12_7/bin/java -Dspring.profiles.active=dev,agent,company -Dsun.rmi.transport.tcp.responseTimeout=3000 -XX:MaxMetaspaceSize=144M -Xms512M -Xmx512M -Dorg.csap.needStatefulRestart=yes -DcsapProcessId=csap-agent -DCSAP_FOLDER=/opt/csap/csap-platform -Djava.io.tmpdir=/opt/csap/csap-platform/working/csap-agent/java-io-tmpdir org.springframework.boot.loader.JarLauncher --server.port=8011
Dec 08 11:57:12 csap-dev04.lab.sensus.net csap-start.sh[26238]: *
Dec 08 11:57:12 csap-dev04.lab.sensus.net csap-start.sh[26238]: pwd: /opt/csap/csap-platform/working/csap-agent
Dec 08 11:57:12 csap-dev04.lab.sensus.net csap-start.sh[26238]: useRoot: no
Dec 08 11:57:12 csap-dev04.lab.sensus.net csap-start.sh[26238]: logs: location '/opt/csap/csap-platform/working/csap-agent/logs/console.log', append: 'appendLogs'
Dec 08 11:57:12 csap-dev04.lab.sensus.net csap-start.sh[26238]: Arguments: -Dspring.profiles.active=dev,agent,company -Dsun.rmi.transport.tcp.responseTimeout=3000 -XX:MaxMetaspaceSize=144M -Xms512M -Xmx512M -Dorg.csap.needStatefulRestart=yes -DcsapProcessId=csap-agent -DCSAP_FOLDER=/opt/csap/csap-platform -Djava.io.tmpdir=/opt/csap/csap-platform/working/csap-agent/java-io-tmpdir org.springframework.boot.loader.JarLauncher --server.port=8011
Dec 08 11:57:12 csap-dev04.lab.sensus.net csap-start.sh[26238]: return code: 0
Dec 08 11:57:12 csap-dev04.lab.sensus.net csap-start.sh[26238]: pidFile: .pid
Dec 08 11:57:13 csap-dev04.lab.sensus.net csap-start.sh[26238]: *
Dec 08 11:57:13 csap-dev04.lab.sensus.net csap-start.sh[26238]: **
Dec 08 11:57:13 csap-dev04.lab.sensus.net systemd[1]: Started Csap Agent Service.
*
**
***
****
***** CSAP install complete. To validate: http://csap-dev04.lab.sensus.net:8011
****
***
**
*
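With the csap.service unit enabled and started, the install can also be validated from a shell, not just the URL above; a sketch (the exact endpoint path is an assumption):

```bash
# Confirm the unit is active and the agent answers on port 8011
systemctl status csap.service --no-pager
curl --silent --max-time 5 http://csap-dev04.lab.sensus.net:8011/ | head
```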
csap-dev04 11:59:25 December 8 2021
Executing OS command on host csap-dev04:/opt/csap/csap-platform/bin/csap-start.sh_P_
12/08/2021 11:59:25 csap@csap-dev04.lab.sensus.net Running: csap-start.sh
cli: process_csap_cli_args() parsing parameters
JAVA_HOME: '/opt/csap/java/openjdk-11.0.12_7'
java -version: 'OpenJDK 64-Bit Server VM 18.9 (build 11.0.12+7, mixed mode)'
*
**
***
****
***** csap-start.sh: ensuring processes are stopped
****
***
**
*
process check: No existing processes found
________________________________________________________________________________________________________________________
wait_for_port_free csap-start.sh: pre start check - waiting for filter: '8014' to not be found in 'ss --numeric --processes' (--listen)
________________________________________________________________________________________________________________________
attempt 1 of 2: found: '0'
No remaining instances of '8014'
working folder: mkdir: created directory ‘/opt/csap/csap-platform/working/kubelet’
csap api: checking for /opt/csap/csap-platform/working/kubelet/csap-api.sh
extracting: /opt/csap/csap-platform/packages/kubelet.zip to /opt/csap/csap-platform/working/kubelet
scripts: Found scripts in package, running dos2unix
permissions: running chmod -R 755 /opt/csap/csap-platform/working/kubelet
Loading: csap-api.sh
*
**
***
****
***** CSAP kubelet package
****
***
**
*
kubernetesVersion: 1.21.5
rpm version: 1.21.5-0
image version: v1.21.5
k8PodSubnet: 192.168.0.0/16
singleMasterUntaint: yes
imageRepository: none
api server cipherSuites: none
api server strictDirectives: none
kubernetesRepo: http://media.lab.sensus.net/media/third_party/kubernetes/kubernetes-el7.repo
calico_ip_method: interface=ens192
veth_mtu: 1440
kubernetesMasters: csap-dev04
kubernetesMasterDns: not-specified
kubernetesAllInOne: false
kubernetesStorage: /var/lib/kubelet
isForceIpForApiServer: false
etcdFolder: /var/lib/etcd
masterBackupFolder: /mnt/CSAP_DEV01_NFS/kubernetes-backups
kubernetesAllInOne: false
________________________________________________________________________________________________________________________
Starting kubelet
________________________________________________________________________________________________________________________
mkdir: created directory ‘/opt/csap/csap-platform/working/kubelet/logs’
service resources: No custom settings found, if desired add files to /opt/csap/csap-platform/definition/resources/kubelet using csap editor.
________________________________________________________________________________________________________________________
Building: /opt/csap/csap-platform/working/kubelet/vcenter-env.sh
________________________________________________________________________________________________________________________
‘/opt/csap/csap-platform/working/kubelet/configuration/os/vcenter-env.sh’ -> ‘/opt/csap/csap-platform/working/kubelet/vcenter-env.sh’
----- run_using_root: 'rm --recursive --force --verbose /root/vcenter ; mkdir --parents --verbose /root/vcenter ; cp /opt/csap/csap-platform/working/kubelet/vcenter-env.sh /root/vcenter ; cp /opt/csap/csap-platform/bin/csap-environment.sh /root/vcenter; cp --recursive /opt/csap/csap-platform/bin/functions /root/vcenter; cp /opt/csap/csap-platform/bin/govc /root/vcenter' -----
removed ‘/root/vcenter/csap-environment.sh’
removed ‘/root/vcenter/functions/print.sh’
removed ‘/root/vcenter/functions/container.sh’
removed ‘/root/vcenter/functions/misc.sh’
removed ‘/root/vcenter/functions/network.sh’
removed ‘/root/vcenter/functions/service.sh’
removed directory: ‘/root/vcenter/functions’
removed ‘/root/vcenter/govc’
removed ‘/root/vcenter/vcenter-env.sh’
removed directory: ‘/root/vcenter’
mkdir: created directory ‘/root/vcenter’
________________________________________________________________________________________________________________________
Creating kubeadm configuration: '/opt/csap/csap-platform/working/kubelet/configuration/kubeadm/kubeadm-Dec-08-11-59.yaml'
________________________________________________________________________________________________________________________
append_file: Note: specified targetFile '/opt/csap/csap-platform/working/kubelet/configuration/kubeadm/kubeadm-Dec-08-11-59.yaml', does not exist, creating
append_file(): file '/opt/csap/csap-platform/working/kubelet/configuration/kubeadm/cluster-configuration.yaml' to file: '/opt/csap/csap-platform/working/kubelet/configuration/kubeadm/kubeadm-Dec-08-11-59.yaml'
replace_all_in_file: Replacing 1 occurrence of 'ETCD_FOLDER' with '/var/lib/etcd' in '/opt/csap/csap-platform/working/kubelet/configuration/kubeadm/kubeadm-Dec-08-11-59.yaml'
append_file() : line: '\n---\n' to file: '/opt/csap/csap-platform/working/kubelet/configuration/kubeadm/kubeadm-Dec-08-11-59.yaml'
append_file(): file '/opt/csap/csap-platform/working/kubelet/configuration/kubeadm/init-configuration.yaml' to file: '/opt/csap/csap-platform/working/kubelet/configuration/kubeadm/kubeadm-Dec-08-11-59.yaml'
replace_all_in_file: Replacing 1 occurrence of 'KUBERNETES_STORAGE' with '/var/lib/kubelet' in '/opt/csap/csap-platform/working/kubelet/configuration/kubeadm/kubeadm-Dec-08-11-59.yaml'
replace_all_in_file: WARNING: no occurrences of 'MASTER_HOST' in '/opt/csap/csap-platform/working/kubelet/configuration/kubeadm/kubeadm-Dec-08-11-59.yaml'
replace_all_in_file: Replacing 1 occurrence of 'JOIN_TOKEN' with '584t76.b0b7c7r75rbc0ml0' in '/opt/csap/csap-platform/working/kubelet/configuration/kubeadm/kubeadm-Dec-08-11-59.yaml'
replace_all_in_file: Replacing 1 occurrence of 'K8_IMAGE_VERSION' with 'v1.21.5' in '/opt/csap/csap-platform/working/kubelet/configuration/kubeadm/kubeadm-Dec-08-11-59.yaml'
replace_all_in_file: Replacing 1 occurrence of 'K8_POD_SUBNET' with '192.168.0.0/16' in '/opt/csap/csap-platform/working/kubelet/configuration/kubeadm/kubeadm-Dec-08-11-59.yaml'
Type: Master (number: 1), kubeadm parameters:
________________________________________________________________________________________________________________________
init --config /opt/csap/csap-platform/working/kubelet/configuration/kubeadm/kubeadm-Dec-08-11-59.yaml --ignore-preflight-errors=SystemVerification,DirAvailable--var-lib-etcd
________________________________________________________________________________________________________________________
kubeadm configuration file: '/opt/csap/csap-platform/working/kubelet/configuration/kubeadm/kubeadm-Dec-08-11-59.yaml':
________________________________________________________________________________________________________________________
#
# core: https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2
#
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
# Dual stack support
#featureGates:
# IPv6DualStack: false
#
# required for iptable rules on calico
#
networking:
# default is 192.168.0.0/16
podSubnet: 192.168.0.0/16
kubernetesVersion: v1.21.5
#
# NOTE: #__ will be replaced by csap-api.sh based on settings
#
etcd:
local:
dataDir: /var/lib/etcd
#__imageRepository: updatedByInstaller
#__extraArgs1:
#__cipher-suites: updatedByInstaller
#
# ref https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/
#
#__apiServer:
#__extraArgs2:
#__tls-cipher-suites: updatedByInstaller
#__strict-transport-security-directives: updatedByInstaller
#__imageRepository: updatedByInstaller
#__dns:
#__imageRepository: updatedByInstaller
# when running ha: HOST:Port . for dev, use csap-dev01.lab.sensus.net:6443 and nginx
#__controlPlaneEndpoint: MASTER_DNS
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
#__tlsCipherSuites: updatedByInstaller
#__failSwapOn: updatedByInstaller
---
#
# init configuration: loaded on primary master
# Refer to: https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2
#
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
token: "584t76.b0b7c7r75rbc0ml0"
ttl: 4h0m0s
# ttl: 24h0m0s
# usages:
# - signing
# - authentication
#api.advertiseAddress: 10.1.165.22
nodeRegistration:
kubeletExtraArgs:
root-dir: "/var/lib/kubelet"
#cloud-provider: "openstack"
#cloud-config: "/etc/kubernetes/cloud.conf"
#cgroup-driver: "systemd"
#logtostderr: "false"
#log-dir: "/opt/csap/processing/kubelet_8014/logs"
________________________________________________________________________________________________________________________
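The generated file is then fed to kubeadm with the parameters printed earlier:

```bash
# Initialize the first master using the generated configuration
kubeadm init \
    --config /opt/csap/csap-platform/working/kubelet/configuration/kubeadm/kubeadm-Dec-08-11-59.yaml \
    --ignore-preflight-errors=SystemVerification,DirAvailable--var-lib-etcd
```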
________________________________________________________________________________________________________________________
nfs configuration: nfs_server: '10.22.10.59' , nfs_path :'/CSAP_DEV01_NFS', nfs_mount:'/mnt/CSAP_DEV01_NFS'
________________________________________________________________________________________________________________________
Attempting to mount nfs:
________________________________________________________________________________________________________________________
________________________________________________________________________________________________________________________
nfs_add_mount() - mount source: '10.22.10.59:/CSAP_DEV01_NFS', mount_target: '/mnt/CSAP_DEV01_NFS', mount_options: 'vers=3', exitIfPresent: true
________________________________________________________________________________________________________________________
________________________________________________________________________________________________________________________
INFO: existing mount found '10.22.10.59:/CSAP_DEV01_NFS', skipping remaining commands.
________________________________________________________________________________________________________________________
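nfs_add_mount skips the mount here because it already exists; if it did not, the equivalent mount would be (assuming standard mount.nfs):

```bash
# Mount the NFS share used for kubernetes backups (skipped above: already mounted)
mkdir --parents /mnt/CSAP_DEV01_NFS
mount --types nfs --options vers=3 10.22.10.59:/CSAP_DEV01_NFS /mnt/CSAP_DEV01_NFS
```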
________________________________________________________________________________________________________________________
________________________________________________________________________________________________________________________
Creating environment file for kubeadm commands
________________________________________________________________________________________________________________________
append_file: Note: specified targetFile '/opt/csap/csap-platform/working/kubelet/install-variables.sh', does not exist, creating
append_file() : line: '# generated file' to file: '/opt/csap/csap-platform/working/kubelet/install-variables.sh'
________________________________________________________________________________________________________________________
'/opt/csap/csap-platform/working/kubelet/install-variables.sh':
# generated file
#
export csapUser=csap
export kubernetesAllInOne=false
export kubernetesStorage=/var/lib/kubelet
export kubeletExtraArgs=""
export calico_ip_method="interface=ens192"
export veth_mtu="1440"
export CSAP_FOLDER=/opt/csap/csap-platform
export AGENT_ENDPOINT=":8011"
export csapName=kubelet
export csapProcessId=kubelet
export csapWorkingDir=/opt/csap/csap-platform/working/kubelet
export csapPrimaryPort=8014
export masterBackupFolder=/mnt/CSAP_DEV01_NFS/kubernetes-backups
export imageRepository="none"
export kubeadmParameters="init --config /opt/csap/csap-platform/working/kubelet/configuration/kubeadm/kubeadm-Dec-08-11-59.yaml --ignore-preflight-errors=SystemVerification,DirAvailable--var-lib-etcd"
export kubernetesMasters="csap-dev04"
export kubernetesMasterDns=not-specified
export k8Version=1.21.5-0
export k8ImageVersion=v1.21.5
export isForceIpForApiServer=false
export kubernetesRepo=http://media.lab.sensus.net/media/third_party/kubernetes/kubernetes-el7.repo
export clusterToken="584t76.b0b7c7r75rbc0ml0"
________________________________________________________________________________________________________________________
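Note: downstream deploy scripts consume this file by sourcing it; a minimal sketch:

    # load the generated variables into the current shell
    source /opt/csap/csap-platform/working/kubelet/install-variables.sh
    echo "kubeadm will be invoked as: kubeadm $kubeadmParameters"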
12/08/2021 11:59:26 [email protected] Running: csap-deploy-as-root.sh
________________________________________________________________________________________________________________________
install() - kubelet, kubeadm, and dependencies
________________________________________________________________________________________________________________________
current directory: '/opt/csap/csap-platform/working/kubelet'
________________________________________________________________________________________________________________________
wait_for_port_free kubernetes ports: pre-os - waiting for filter: '9099' to not be found in 'ss --numeric --processes' (--listen)
________________________________________________________________________________________________________________________
attempt 1 of 10: found: '0'
No remaining instances of '9099'
________________________________________________________________________________________________________________________
wait_for_port_free kubernetes ports: pre-os - waiting for filter: '10250' to not be found in 'ss --numeric --processes' (--listen)
________________________________________________________________________________________________________________________
attempt 1 of 10: found: '0'
No remaining instances of '10250'
________________________________________________________________________________________________________________________
wait_for_port_free kubernetes ports: pre-os - waiting for filter: '10251' to not be found in 'ss --numeric --processes' (--listen)
________________________________________________________________________________________________________________________
attempt 1 of 10: found: '0'
No remaining instances of '10251'
________________________________________________________________________________________________________________________
wait_for_port_free kubernetes ports: pre-os - waiting for filter: '10252' to not be found in 'ss --numeric --processes' (--listen)
________________________________________________________________________________________________________________________
attempt 1 of 10: found: '0'
No remaining instances of '10252'
________________________________________________________________________________________________________________________
wait_for_port_free kubernetes ports: pre-os - waiting for filter: '10253' to not be found in 'ss --numeric --processes' (--listen)
________________________________________________________________________________________________________________________
attempt 1 of 10: found: '0'
No remaining instances of '10253'
________________________________________________________________________________________________________________________
wait_for_port_free kubernetes ports: pre-os - waiting for filter: '10254' to not be found in 'ss --numeric --processes' (--listen)
________________________________________________________________________________________________________________________
attempt 1 of 10: found: '0'
No remaining instances of '10254'
________________________________________________________________________________________________________________________
wait_for_port_free kubernetes ports: pre-os - waiting for filter: '10255' to not be found in 'ss --numeric --processes' (--listen)
________________________________________________________________________________________________________________________
attempt 1 of 10: found: '0'
No remaining instances of '10255'
________________________________________________________________________________________________________________________
wait_for_port_free kubernetes ports: pre-os - waiting for filter: '10256' to not be found in 'ss --numeric --processes' (--listen)
________________________________________________________________________________________________________________________
attempt 1 of 10: found: '0'
No remaining instances of '10256'
________________________________________________________________________________________________________________________
wait_for_port_free kubernetes ports: pre-os - waiting for filter: '10257' to not be found in 'ss --numeric --processes' (--listen)
________________________________________________________________________________________________________________________
attempt 1 of 10: found: '0'
No remaining instances of '10257'
________________________________________________________________________________________________________________________
wait_for_port_free kubernetes ports: pre-os - waiting for filter: '10258' to not be found in 'ss --numeric --processes' (--listen)
________________________________________________________________________________________________________________________
attempt 1 of 10: found: '0'
No remaining instances of '10258'
________________________________________________________________________________________________________________________
wait_for_port_free kubernetes ports: pre-os - waiting for filter: '10259' to not be found in 'ss --numeric --processes' (--listen)
________________________________________________________________________________________________________________________
attempt 1 of 10: found: '0'
No remaining instances of '10259'
________________________________________________________________________________________________________________________
wait_for_port_free kubernetes ports: pre-os - waiting for filter: '30080' to not be found in 'ss --numeric --processes' (--listen)
________________________________________________________________________________________________________________________
attempt 1 of 10: found: '0'
No remaining instances of '30080'
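Note: wait_for_port_free presumably polls ss until the filter disappears; an approximate sketch (the loop internals are an assumption, the ss flags come from the log):

    # wait up to 10 attempts for a port to stop listening - illustrative only
    port=9099
    for attempt in $(seq 1 10); do
        found=$(ss --numeric --listening --processes | grep -c ":${port} ")
        echo "attempt $attempt of 10: found: '$found'"
        (( found == 0 )) && break
        sleep 5
    done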
________________________________________________________________________________________________________________________
install_os() - configuring kubernetes dependencies
________________________________________________________________________________________________________________________
adding repo: http://media.lab.sensus.net/media/third_party/kubernetes/kubernetes-el7.repo
yum enabled repositories:
________________________________________________________________________________________________________________________
base/7/x86_64 CentOS-7 - Base enabled: 10,072
docker-ce-stable-community/7 Docker CE Stable - x86_64 enabled: 135
epel/x86_64 Extra Packages for Enterpris enabled: 13,690
extras/7/x86_64 CentOS-7 - Extras enabled: 500
kubernetes-community Kubernetes enabled: 739
updates/7/x86_64 CentOS-7 - Updates enabled: 3,187
________________________________________________________________________________________________________________________
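Note: adding a repo file by URL can be reproduced with yum-config-manager (assuming yum-utils is installed); a sketch:

    yum-config-manager --add-repo http://media.lab.sensus.net/media/third_party/kubernetes/kubernetes-el7.repo
    yum repolist enabled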
Disabling swap (no k8s support) 'swapoff -a'
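Note: swapoff -a only disables swap for the running system. Keeping it off across reboots also requires commenting out the fstab entry, a common companion step shown here as an assumption rather than something this installer is confirmed to do:

    swapoff -a
    # comment out swap entries so the change survives a reboot
    sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab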
Security-Enhanced Linux:
________________________________________________________________________________________________________________________
SELinux status: enabled
SELinuxfs mount: /sys/fs/selinux
SELinux root directory: /etc/selinux
Loaded policy name: targeted
Current mode: enforcing
Mode from config file: enforcing
Policy MLS status: enabled
Policy deny_unknown status: allowed
Max kernel policy version: 31
________________________________________________________________________________________________________________________
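Note: the block above is sestatus output; the mode can be spot-checked before kubeadm runs with:

    getenforce                      # Enforcing / Permissive / Disabled
    sestatus | grep 'Current mode'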
-------- Installing packages: 'yum install --assumeyes kubelet-1.21.5-0 kubeadm-1.21.5-0 kubectl-1.21.5-0' --------
Loaded plugins: fastestmirror, remove-with-leaves, versionlock
Loading mirror speeds from cached hostfile
* base: mirror.wdc1.us.leaseweb.net
* epel: epel.mirror.constant.com
* extras: linux-mirrors.fnal.gov
* updates: mirror.siena.edu
Excluding 1 update due to versionlock (use "yum versionlock status" to show it)
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.21.5-0 will be installed
--> Processing Dependency: kubernetes-cni >= 0.8.6 for package: kubeadm-1.21.5-0.x86_64
--> Processing Dependency: cri-tools >= 1.13.0 for package: kubeadm-1.21.5-0.x86_64
---> Package kubectl.x86_64 0:1.21.5-0 will be installed
---> Package kubelet.x86_64 0:1.21.5-0 will be installed
--> Processing Dependency: socat for package: kubelet-1.21.5-0.x86_64
--> Processing Dependency: conntrack for package: kubelet-1.21.5-0.x86_64
--> Running transaction check
---> Package conntrack-tools.x86_64 0:1.4.4-7.el7 will be installed
--> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.1)(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
--> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.0)(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
--> Processing Dependency: libnetfilter_cthelper.so.0(LIBNETFILTER_CTHELPER_1.0)(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
--> Processing Dependency: libnetfilter_queue.so.1()(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
--> Processing Dependency: libnetfilter_cttimeout.so.1()(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
--> Processing Dependency: libnetfilter_cthelper.so.0()(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
---> Package cri-tools.x86_64 0:1.19.0-0 will be installed
---> Package kubernetes-cni.x86_64 0:0.8.7-0 will be installed
---> Package socat.x86_64 0:1.7.3.2-2.el7 will be installed
--> Running transaction check
---> Package libnetfilter_cthelper.x86_64 0:1.0.0-11.el7 will be installed
---> Package libnetfilter_cttimeout.x86_64 0:1.0.0-7.el7 will be installed
---> Package libnetfilter_queue.x86_64 0:1.0.2-2.el7_2 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
kubeadm x86_64 1.21.5-0 kubernetes-community 9.1 M
kubectl x86_64 1.21.5-0 kubernetes-community 9.6 M
kubelet x86_64 1.21.5-0 kubernetes-community 20 M
Installing for dependencies:
conntrack-tools x86_64 1.4.4-7.el7 base 187 k
cri-tools x86_64 1.19.0-0 kubernetes-community 5.7 M
kubernetes-cni x86_64 0.8.7-0 kubernetes-community 19 M
libnetfilter_cthelper x86_64 1.0.0-11.el7 base 18 k
libnetfilter_cttimeout x86_64 1.0.0-7.el7 base 18 k
libnetfilter_queue x86_64 1.0.2-2.el7_2 base 23 k
socat x86_64 1.7.3.2-2.el7 base 290 k
Transaction Summary
================================================================================
Install 3 Packages (+7 Dependent packages)
Total download size: 64 M
Installed size: 280 M
Downloading packages:
--------------------------------------------------------------------------------
Total 86 MB/s | 64 MB 00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : socat-1.7.3.2-2.el7.x86_64 1/10
Installing : libnetfilter_cttimeout-1.0.0-7.el7.x86_64 2/10
Installing : libnetfilter_queue-1.0.2-2.el7_2.x86_64 3/10
Installing : libnetfilter_cthelper-1.0.0-11.el7.x86_64 4/10
Installing : conntrack-tools-1.4.4-7.el7.x86_64 5/10
Installing : kubernetes-cni-0.8.7-0.x86_64 6/10
Installing : kubelet-1.21.5-0.x86_64 7/10
Installing : kubectl-1.21.5-0.x86_64 8/10
Installing : cri-tools-1.19.0-0.x86_64 9/10
Installing : kubeadm-1.21.5-0.x86_64 10/10
Verifying : cri-tools-1.19.0-0.x86_64 1/10
Verifying : kubectl-1.21.5-0.x86_64 2/10
Verifying : libnetfilter_cthelper-1.0.0-11.el7.x86_64 3/10
Verifying : conntrack-tools-1.4.4-7.el7.x86_64 4/10
Verifying : kubernetes-cni-0.8.7-0.x86_64 5/10
Verifying : libnetfilter_queue-1.0.2-2.el7_2.x86_64 6/10
Verifying : kubelet-1.21.5-0.x86_64 7/10
Verifying : libnetfilter_cttimeout-1.0.0-7.el7.x86_64 8/10
Verifying : socat-1.7.3.2-2.el7.x86_64 9/10
Verifying : kubeadm-1.21.5-0.x86_64 10/10
Installed:
kubeadm.x86_64 0:1.21.5-0 kubectl.x86_64 0:1.21.5-0 kubelet.x86_64 0:1.21.5-0
Dependency Installed:
conntrack-tools.x86_64 0:1.4.4-7.el7
cri-tools.x86_64 0:1.19.0-0
kubernetes-cni.x86_64 0:0.8.7-0
libnetfilter_cthelper.x86_64 0:1.0.0-11.el7
libnetfilter_cttimeout.x86_64 0:1.0.0-7.el7
libnetfilter_queue.x86_64 0:1.0.2-2.el7_2
socat.x86_64 0:1.7.3.2-2.el7
Complete!
yum version lock: 'kubelet-1.21.5-0 kubeadm-1.21.5-0 kubectl-1.21.5-0':
________________________________________________________________________________________________________________________
Loaded plugins: fastestmirror, remove-with-leaves, versionlock
Adding versionlock on: 0:kubectl-1.21.5-0
Adding versionlock on: 0:kubeadm-1.21.5-0
Adding versionlock on: 0:kubelet-1.21.5-0
versionlock added: 3
________________________________________________________________________________________________________________________
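Note: the locks can be inspected or removed later with the same plugin; a sketch:

    yum versionlock list            # show current locks
    yum versionlock clear           # remove all locks, re-enabling upgrades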
--------------------------------------------- Verifying docker cgroup ---------------------------------------------
WARNING: API is accessible on http://0.0.0.0:4243 without encryption.
Access to the remote API is equivalent to root access on the host. Refer
to the 'Docker daemon attack surface' section in the documentation for
more information: https://docs.docker.com/go/attack-surface/
Passed: Docker cgroup driver: ' Cgroup Driver: systemd
Cgroup Version: 1'
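Note: the cgroup check can be reproduced directly from docker; a sketch:

    docker info --format '{{.CgroupDriver}}'    # expected: systemd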
ip_config_check:
________________________________________________________________________________________________________________________
1
________________________________________________________________________________________________________________________
Updating /etc/sysctl.d/k8s.conf with /opt/csap/csap-platform/working/kubelet/configuration/linux-system/etc-sysctl.d-k8s.conf. /etc/sysctl.d/k8s.conf is being backed up
------------------------------------- Reload kernel settings 'sysctl --system' -------------------------------------
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
fs.file-max = 6815744
fs.suid_dumpable = 1
net.ipv4.ip_local_port_range = 9000 65500
net.ipv4.ip_local_reserved_ports = 10000-11000,30000-32767
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586
kernel.msgmni = 2878
kernel.msgmax = 8192
kernel.msgmnb = 65536
kernel.sem = 250 32000 100 142
kernel.shmmni = 4096
kernel.shmall = 5368709120
kernel.shmmax = 21474836480
kernel.sysrq = 1
fs.aio-max-nr = 3145728
vm.min_free_kbytes = 51200
vm.swappiness = 10
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
* Applying /etc/sysctl.conf ...
fs.file-max = 6815744
fs.suid_dumpable = 1
net.ipv4.ip_local_port_range = 9000 65500
net.ipv4.ip_local_reserved_ports = 10000-11000,30000-32767
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586
kernel.msgmni = 2878
kernel.msgmax = 8192
kernel.msgmnb = 65536
kernel.sem = 250 32000 100 142
kernel.shmmni = 4096
kernel.shmall = 5368709120
kernel.shmmax = 21474836480
kernel.sysrq = 1
fs.aio-max-nr = 3145728
vm.min_free_kbytes = 51200
vm.swappiness = 10
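Note: the two kubernetes-specific settings written to /etc/sysctl.d/k8s.conf can be verified individually; a sketch:

    # both should report 1 after 'sysctl --system'
    sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables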
------------------------------- reloading system manager: 'systemctl daemon-reload' -------------------------------
________________________________________________________________________________________________________________________
Enable transparent masquerading and facilitate Virtual Extensible LAN (VxLAN) traffic for communication between Kubernetes pods across the cluster
________________________________________________________________________________________________________________________
-------------------------------- Pulling kubernetes images using version: 'v1.21.5' --------------------------------
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.21.5
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.21.5
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.21.5
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.21.5
[config/images] Pulled k8s.gcr.io/pause:3.4.1
[config/images] Pulled k8s.gcr.io/etcd:3.4.13-0
[config/images] Pulled k8s.gcr.io/coredns/coredns:v1.8.0
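Note: the same pre-pull can be run by hand for the pinned release; a sketch:

    kubeadm config images list --kubernetes-version v1.21.5
    kubeadm config images pull --kubernetes-version v1.21.5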
________________________________________________________________________________________________________________________
Firewall rules update: only done if firewalld enabled
________________________________________________________________________________________________________________________
FirewallD is not running
FirewallD is not running
FirewallD is not running
FirewallD is not running
FirewallD is not running
FirewallD is not running
FirewallD is not running
________________________________________________________________________________________________________________________
Host is the primary master. For security, update clusterToken: 'dzv7mr.yscwvat5b1y7ampq'
________________________________________________________________________________________________________________________
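Note: rotating the bootstrap token as recommended above uses kubeadm's token subcommands; a sketch:

    kubeadm token list
    kubeadm token create                          # mints a fresh token
    kubeadm token delete 584t76.b0b7c7r75rbc0ml0  # retire the generated one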
*
**
***
****
***** Wed Dec 8 12:00:32 EST 2021 kubeadm reset
****
***
**
*
------------------------------ perform_kubeadm_reset() cleaning up previous installs ------------------------------
[preflight] Running pre-flight checks
W1208 12:00:38.067071 679 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
----------------------------------------- kubernetes systemd cleanup start -----------------------------------------
systemctl listing filtered by pod: pre cleanup:
________________________________________________________________________________________________________________________
________________________________________________________________________________________________________________________
systemctl list-units filtered by pod: post cleanup:
________________________________________________________________________________________________________________________
________________________________________________________________________________________________________________________
Running mount in case kubeadm umounts kubelet due to a bug
mount: /dev/mapper/vg0-var_lib_kubelet is already mounted or /var/lib/kubelet busy
/dev/mapper/vg0-var_lib_kubelet is already mounted on /var/lib/kubelet
Checking test nfs: check if exists: /mnt/nfsshare
________________________________________________________________________________________________________________________
Installing a new cluster. Add the following file to trigger backup recovery:'/mnt/CSAP_DEV01_NFS/kubernetes-backups/load-on-install-yes'
________________________________________________________________________________________________________________________
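Note: triggering backup recovery on a later install is just creating the marker file named above on the NFS share; a sketch:

    touch /mnt/CSAP_DEV01_NFS/kubernetes-backups/load-on-install-yes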
________________________________________________________________________________________________________________________
wait_for_port_free kubernetes ports: pre-kubeadm - waiting for filter: '9099' to not be found in 'ss --numeric --processes' (--listen)
________________________________________________________________________________________________________________________
attempt 1 of 10: found: '0'
No remaining instances of '9099'
________________________________________________________________________________________________________________________
wait_for_port_free kubernetes ports: pre-kubeadm - waiting for filter: '10250' to not be found in 'ss --numeric --processes' (--listen)
________________________________________________________________________________________________________________________
attempt 1 of 10: found: '0'
No remaining instances of '10250'
________________________________________________________________________________________________________________________
wait_for_port_free kubernetes ports: pre-kubeadm - waiting for filter: '10251' to not be found in 'ss --numeric --processes' (--listen)
________________________________________________________________________________________________________________________
attempt 1 of 10: found: '0'
No remaining instances of '10251'
________________________________________________________________________________________________________________________
wait_for_port_free kubernetes ports: pre-kubeadm - waiting for filter: '10252' to not be found in 'ss --numeric --processes' (--listen)
________________________________________________________________________________________________________________________
attempt 1 of 10: found: '0'
No remaining instances of '10252'
________________________________________________________________________________________________________________________
wait_for_port_free kubernetes ports: pre-kubeadm - waiting for filter: '10253' to not be found in 'ss --numeric --processes' (--listen)
________________________________________________________________________________________________________________________
attempt 1 of 10: found: '0'
No remaining instances of '10253'
________________________________________________________________________________________________________________________
wait_for_port_free kubernetes ports: pre-kubeadm - waiting for filter: '10254' to not be found in 'ss --numeric --processes' (--listen)
________________________________________________________________________________________________________________________
attempt 1 of 10: found: '0'
No remaining instances of '10254'
________________________________________________________________________________________________________________________
wait_for_port_free kubernetes ports: pre-kubeadm - waiting for filter: '10255' to not be found in 'ss --numeric --processes' (--listen)
________________________________________________________________________________________________________________________
attempt 1 of 10: found: '0'
No remaining instances of '10255'
________________________________________________________________________________________________________________________
wait_for_port_free kubernetes ports: pre-kubeadm - waiting for filter: '10256' to not be found in 'ss --numeric --processes' (--listen)
________________________________________________________________________________________________________________________
attempt 1 of 10: found: '0'
No remaining instances of '10256'
________________________________________________________________________________________________________________________
wait_for_port_free kubernetes ports: pre-kubeadm - waiting for filter: '10257' to not be found in 'ss --numeric --processes' (--listen)
________________________________________________________________________________________________________________________
attempt 1 of 10: found: '0'
No remaining instances of '10257'
________________________________________________________________________________________________________________________
wait_for_port_free kubernetes ports: pre-kubeadm - waiting for filter: '10258' to not be found in 'ss --numeric --processes' (--listen)
________________________________________________________________________________________________________________________
attempt 1 of 10: found: '0'
No remaining instances of '10258'
________________________________________________________________________________________________________________________
wait_for_port_free kubernetes ports: pre-kubeadm - waiting for filter: '10259' to not be found in 'ss --numeric --processes' (--listen)
________________________________________________________________________________________________________________________
attempt 1 of 10: found: '0'
No remaining instances of '10259'
________________________________________________________________________________________________________________________
wait_for_port_free kubernetes ports: pre-kubeadm - waiting for filter: '30080' to not be found in 'ss --numeric --processes' (--listen)
________________________________________________________________________________________________________________________
attempt 1 of 10: found: '0'
No remaining instances of '30080'
*
**
***
****
***** Wed Dec 8 12:00:38 EST 2021 kubeadm init
****
***
**
*
kubeadmParameters:
________________________________________________________________________________________________________________________
'init --config /opt/csap/csap-platform/working/kubelet/configuration/kubeadm/kubeadm-Dec-08-11-59.yaml --ignore-preflight-errors=SystemVerification,DirAvailable--var-lib-etcd'
________________________________________________________________________________________________________________________
I1208 12:00:38.428078 944 initconfiguration.go:246] loading configuration from "/opt/csap/csap-platform/working/kubelet/configuration/kubeadm/kubeadm-Dec-08-11-59.yaml"
I1208 12:00:38.431208 944 initconfiguration.go:115] detected and using CRI socket: /var/run/dockershim.sock
I1208 12:00:38.431369 944 interface.go:431] Looking for default routes with IPv4 addresses
I1208 12:00:38.431376 944 interface.go:436] Default route transits interface "ens192"
I1208 12:00:38.431526 944 interface.go:208] Interface ens192 is up
I1208 12:00:38.431575 944 interface.go:256] Interface "ens192" has 2 addresses :[10.22.14.244/22 fe80::250:56ff:fe95:a7ef/64].
I1208 12:00:38.431592 944 interface.go:223] Checking addr 10.22.14.244/22.
I1208 12:00:38.431596 944 interface.go:230] IP found 10.22.14.244
I1208 12:00:38.431605 944 interface.go:262] Found valid IPv4 address 10.22.14.244 for interface "ens192".
I1208 12:00:38.431610 944 interface.go:442] Found active IP 10.22.14.244
[init] Using Kubernetes version: v1.21.5
[preflight] Running pre-flight checks
I1208 12:00:38.549286 944 checks.go:582] validating Kubernetes and kubeadm version
I1208 12:00:38.549317 944 checks.go:167] validating if the firewall is enabled and active
I1208 12:00:38.556711 944 checks.go:202] validating availability of port 6443
I1208 12:00:38.557794 944 checks.go:202] validating availability of port 10259
I1208 12:00:38.557827 944 checks.go:202] validating availability of port 10257
I1208 12:00:38.557852 944 checks.go:287] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I1208 12:00:38.557869 944 checks.go:287] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I1208 12:00:38.557883 944 checks.go:287] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I1208 12:00:38.557892 944 checks.go:287] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I1208 12:00:38.557903 944 checks.go:437] validating if the connectivity type is via proxy or direct
I1208 12:00:38.557939 944 checks.go:476] validating http connectivity to first IP address in the CIDR
I1208 12:00:38.557957 944 checks.go:476] validating http connectivity to first IP address in the CIDR
I1208 12:00:38.557969 944 checks.go:103] validating the container runtime
I1208 12:00:38.660274 944 checks.go:129] validating if the "docker" service is enabled and active
I1208 12:00:38.775962 944 checks.go:336] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I1208 12:00:38.776035 944 checks.go:336] validating the contents of file /proc/sys/net/ipv4/ip_forward
I1208 12:00:38.776062 944 checks.go:654] validating whether swap is enabled or not
I1208 12:00:38.776096 944 checks.go:377] validating the presence of executable conntrack
I1208 12:00:38.776600 944 checks.go:377] validating the presence of executable ip
I1208 12:00:38.776788 944 checks.go:377] validating the presence of executable iptables
I1208 12:00:38.776836 944 checks.go:377] validating the presence of executable mount
I1208 12:00:38.776860 944 checks.go:377] validating the presence of executable nsenter
I1208 12:00:38.777064 944 checks.go:377] validating the presence of executable ebtables
I1208 12:00:38.777108 944 checks.go:377] validating the presence of executable ethtool
I1208 12:00:38.777147 944 checks.go:377] validating the presence of executable socat
I1208 12:00:38.777168 944 checks.go:377] validating the presence of executable tc
I1208 12:00:38.777230 944 checks.go:377] validating the presence of executable touch
I1208 12:00:38.777268 944 checks.go:525] running all checks
I1208 12:00:38.886085 944 checks.go:408] checking whether the given node name is valid and reachable using net.LookupHost
I1208 12:00:38.886253 944 checks.go:623] validating kubelet version
I1208 12:00:38.948762 944 checks.go:129] validating if the "kubelet" service is enabled and active
I1208 12:00:38.955975 944 checks.go:202] validating availability of port 10250
I1208 12:00:38.956043 944 checks.go:202] validating availability of port 2379
I1208 12:00:38.956127 944 checks.go:202] validating availability of port 2380
I1208 12:00:38.956156 944 checks.go:250] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I1208 12:00:38.988976 944 checks.go:844] image exists: k8s.gcr.io/kube-apiserver:v1.21.5
I1208 12:00:39.020575 944 checks.go:844] image exists: k8s.gcr.io/kube-controller-manager:v1.21.5
I1208 12:00:39.052844 944 checks.go:844] image exists: k8s.gcr.io/kube-scheduler:v1.21.5
I1208 12:00:39.086499 944 checks.go:844] image exists: k8s.gcr.io/kube-proxy:v1.21.5
I1208 12:00:39.122098 944 checks.go:844] image exists: k8s.gcr.io/pause:3.4.1
I1208 12:00:39.160012 944 checks.go:844] image exists: k8s.gcr.io/etcd:3.4.13-0
I1208 12:00:39.198103 944 checks.go:844] image exists: k8s.gcr.io/coredns/coredns:v1.8.0
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I1208 12:00:39.198199 944 certs.go:110] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I1208 12:00:39.356276 944 certs.go:487] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [csap-dev04.lab.sensus.net kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.22.14.244]
[certs] Generating "apiserver-kubelet-client" certificate and key
I1208 12:00:39.681309 944 certs.go:110] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I1208 12:00:39.973125 944 certs.go:487] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I1208 12:00:40.109795 944 certs.go:110] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I1208 12:00:40.391924 944 certs.go:487] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [csap-dev04.lab.sensus.net localhost] and IPs [10.22.14.244 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [csap-dev04.lab.sensus.net localhost] and IPs [10.22.14.244 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I1208 12:00:41.470351 944 certs.go:76] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1208 12:00:41.781191 944 kubeconfig.go:101] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I1208 12:00:42.001760 944 kubeconfig.go:101] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I1208 12:00:42.292652 944 kubeconfig.go:101] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1208 12:00:42.720736 944 kubeconfig.go:101] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
I1208 12:00:42.912761 944 kubelet.go:63] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I1208 12:00:43.025783 944 manifests.go:96] [control-plane] getting StaticPodSpecs
I1208 12:00:43.026317 944 certs.go:487] validating certificate period for CA certificate
I1208 12:00:43.026455 944 manifests.go:109] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I1208 12:00:43.026462 944 manifests.go:109] [control-plane] adding volume "etc-pki" for component "kube-apiserver"
I1208 12:00:43.026467 944 manifests.go:109] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I1208 12:00:43.034379 944 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I1208 12:00:43.034414 944 manifests.go:96] [control-plane] getting StaticPodSpecs
I1208 12:00:43.034765 944 manifests.go:109] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I1208 12:00:43.034778 944 manifests.go:109] [control-plane] adding volume "etc-pki" for component "kube-controller-manager"
I1208 12:00:43.034784 944 manifests.go:109] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I1208 12:00:43.034789 944 manifests.go:109] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I1208 12:00:43.034794 944 manifests.go:109] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I1208 12:00:43.035797 944 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I1208 12:00:43.035817 944 manifests.go:96] [control-plane] getting StaticPodSpecs
I1208 12:00:43.036127 944 manifests.go:109] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I1208 12:00:43.037297 944 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1208 12:00:43.038920 944 local.go:74] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I1208 12:00:43.038941 944 waitcontrolplane.go:87] [wait-control-plane] Waiting for the API server to be healthy
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 14.502436 seconds
I1208 12:00:57.543447 944 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1208 12:00:57.551950 944 uploadconfig.go:122] [upload-config] Uploading the kubelet component config to a ConfigMap
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
I1208 12:00:57.557314 944 uploadconfig.go:127] [upload-config] Preserving the CRISocket information for the control-plane node
I1208 12:00:57.557331 944 patchnode.go:30] [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "csap-dev04.lab.sensus.net" as an annotation
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node csap-dev04.lab.sensus.net as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node csap-dev04.lab.sensus.net as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 584t76.b0b7c7r75rbc0ml0
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1208 12:00:58.593893 944 clusterinfo.go:45] [bootstrap-token] loading admin kubeconfig
I1208 12:00:58.594304 944 clusterinfo.go:56] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig
I1208 12:00:58.594517 944 clusterinfo.go:68] [bootstrap-token] creating/updating ConfigMap in kube-public namespace
I1208 12:00:58.596288 944 clusterinfo.go:82] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
I1208 12:00:58.599963 944 kubeletfinalize.go:88] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found "/var/lib/kubelet/pki/kubelet-client-current.pem"
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1208 12:00:58.600876 944 kubeletfinalize.go:132] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation
[addons] Applied essential addon: CoreDNS
I1208 12:00:58.981075 944 request.go:600] Waited for 191.324762ms due to client-side throttling, not priority and fairness, request: POST:https://10.22.14.244:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.22.14.244:6443 --token 584t76.b0b7c7r75rbc0ml0 \
--discovery-token-ca-cert-hash sha256:fd51351d5e7b474fe8f9ee5910d920de213a62dd96a00537d6aabfa9714bdd9b
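Note: the bootstrap token above was created with a 4h ttl; once it expires, a fresh join command can be generated on the master with:

    kubeadm token create --print-join-command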
------------------------------------ Creating master credentials: '/root/.kube' ------------------------------------
mkdir: created directory ‘/root/.kube’
________________________________________________________________________________________________________________________
network install: calico
________________________________________________________________________________________________________________________
updating variable: CALICO_IP_METHOD: 'interface=ens192'
replace_all_in_file: Replacing 2 occurrences of '$ip_detection' with 'interface=ens192' in '/opt/csap/csap-platform/working/kubelet/configuration/network/calico.yaml'
updating variable: veth_mtu: '1440'
replace_all_in_file: Replacing 1 occurrence of '$veth_mtu' with '1440' in '/opt/csap/csap-platform/working/kubelet/configuration/network/calico.yaml'
updating variable: FELIX_IPTABLESBACKEND: 'auto'
replace_all_in_file: Replacing 1 occurrence of '$ipTablesBackend' with 'auto' in '/opt/csap/csap-platform/working/kubelet/configuration/network/calico.yaml'
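Note: replace_all_in_file() presumably wraps sed; an approximate sketch of the three substitutions (the helper's internals are an assumption):

    calico_yaml=/opt/csap/csap-platform/working/kubelet/configuration/network/calico.yaml
    sed -i 's|\$ip_detection|interface=ens192|g' "$calico_yaml"
    sed -i 's|\$veth_mtu|1440|g'                 "$calico_yaml"
    sed -i 's|\$ipTablesBackend|auto|g'          "$calico_yaml"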
________________________________________________________________________________________________________________________
Adding: networking, refer to: https://docs.projectcalico.org/archive/v3.18/getting-started/kubernetes/
'/opt/csap/csap-platform/working/kubelet/configuration/network/calico.yaml'
________________________________________________________________________________________________________________________
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created
________________________________________________________________________________________________________________________
Wait for calico ready on host: 'csap-dev04'
________________________________________________________________________________________________________________________
------------------------------------------------- attempt 1 of 100 -------------------------------------------------
------------------------------------------------- attempt 2 of 100 -------------------------------------------------
------------------------------------------------- attempt 3 of 100 -------------------------------------------------
calicoPodOnCurrentHost: 'calico-node-jxl5q'
NAME READY STATUS RESTARTS AGE
calico-node-jxl5q 0/1 Init:0/3 0 2s
------------------------------------------------- attempt 4 of 100 -------------------------------------------------
calicoPodOnCurrentHost: 'calico-node-jxl5q'
NAME READY STATUS RESTARTS AGE
calico-node-jxl5q 0/1 Init:0/3 0 7s
------------------------------------------------- attempt 5 of 100 -------------------------------------------------
calicoPodOnCurrentHost: 'calico-node-jxl5q'
NAME READY STATUS RESTARTS AGE
calico-node-jxl5q 0/1 Init:2/3 0 12s
------------------------------------------------- attempt 6 of 100 -------------------------------------------------
calicoPodOnCurrentHost: 'calico-node-jxl5q'
NAME READY STATUS RESTARTS AGE
calico-node-jxl5q 0/1 Init:2/3 0 17s
------------------------------------------------- attempt 7 of 100 -------------------------------------------------
calicoPodOnCurrentHost: 'calico-node-jxl5q'
NAME READY STATUS RESTARTS AGE
calico-node-jxl5q 0/1 PodInitializing 0 22s
------------------------------------------------- attempt 8 of 100 -------------------------------------------------
calicoPodOnCurrentHost: 'calico-node-jxl5q'
NAME READY STATUS RESTARTS AGE
calico-node-jxl5q 0/1 PodInitializing 0 27s
------------------------------------------------- attempt 9 of 100 -------------------------------------------------
calicoPodOnCurrentHost: 'calico-node-jxl5q'
NAME READY STATUS RESTARTS AGE
calico-node-jxl5q 1/1 Running 0 33s
________________________________________________________________________________________________________________________
Polling for HealthReport{Live:true, Ready:true}
________________________________________________________________________________________________________________________
------------------------ attempt 1 of 100, last 10 lines of logs (but checking --since=1h) ------------------------
bird: device1: State changed to feed
bird: direct1: Starting
bird: direct1: Connected to table master
bird: direct1: State changed to feed
bird: Graceful restart started
bird: Graceful restart done
bird: Started
bird: device1: State changed to up
bird: direct1: State changed to up
2021-12-08 17:01:45.590 [INFO][79] felix/health.go 196: Overall health status changed newStatus=&health.HealthReport{Live:true, Ready:true}
________________________________________________________________________________________________________________________
Assuming calico is initialized successfully - found 'HealthReport{Live:true, Ready:true}'
________________________________________________________________________________________________________________________
kube-system calico-kube-controllers-67bb5696f5-sk9c4 0/1 ContainerCreating 0 35s
kube-system calico-node-jxl5q 1/1 Running 0 35s
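Note: the per-host calico pod lookup can be approximated with a label and field selector (the installer's exact query is an assumption):

    kubectl get pods --namespace kube-system \
        --selector k8s-app=calico-node \
        --field-selector spec.nodeName=$(hostname --fqdn)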
________________________________________________________________________________________________________________________
Waiting for node: 'csap-dev04.lab.sensus.net' to be in ready state
________________________________________________________________________________________________________________________
NAME STATUS ROLES AGE VERSION
csap-dev04.lab.sensus.net Ready control-plane,master 52s v1.21.5
---------- attempt 1 of 100: csap-dev04.lab.sensus.net Ready control-plane,master 57s v1.21.5 ----------
NAME STATUS ROLES AGE VERSION
csap-dev04.lab.sensus.net Ready control-plane,master 57s v1.21.5
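Note: the same readiness wait can be expressed with kubectl directly; a sketch:

    kubectl wait --for=condition=Ready node/csap-dev04.lab.sensus.net --timeout=300s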
________________________________________________________________________________________________________________________
Wait for calico ready on host: 'csap-dev04'
________________________________________________________________________________________________________________________
------------------------------------------------- attempt 1 of 100 -------------------------------------------------
calicoPodOnCurrentHost: 'calico-node-jxl5q'
NAME READY STATUS RESTARTS AGE
calico-node-jxl5q 1/1 Running 0 45s
________________________________________________________________________________________________________________________
Polling for HealthReport{Live:true, Ready:true}
________________________________________________________________________________________________________________________
------------------------ attempt 1 of 100, last 10 lines of logs (but checking --since=1h) ------------------------
2021-12-08 17:01:48.509 [INFO][79] felix/int_dataplane.go 1430: Received *proto.WorkloadEndpointUpdate update from calculation graph msg=id:<orchestrator_id:"k8s" workload_id:"kube-system/calico-kube-controllers-67bb5696f5-sk9c4" endpoint_id:"eth0" > endpoint:<state:"active" name:"cali26ae9c02382" profile_ids:"kns.kube-system" profile_ids:"ksa.kube-system.calico-kube-controllers" ipv4_nets:"192.168.203.65/32" >
2021-12-08 17:01:48.509 [INFO][79] felix/endpoint_mgr.go 583: Updating per-endpoint chains. id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"kube-system/calico-kube-controllers-67bb5696f5-sk9c4", EndpointId:"eth0"}
2021-12-08 17:01:48.509 [INFO][79] felix/table.go 506: Queueing update of chain. chainName="cali-tw-cali26ae9c02382" ipVersion=0x4 table="filter"
2021-12-08 17:01:48.509 [INFO][79] felix/table.go 506: Queueing update of chain. chainName="cali-fw-cali26ae9c02382" ipVersion=0x4 table="filter"
2021-12-08 17:01:48.509 [INFO][79] felix/endpoint_mgr.go 614: Updating endpoint routes. id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"kube-system/calico-kube-controllers-67bb5696f5-sk9c4", EndpointId:"eth0"}
2021-12-08 17:01:48.509 [INFO][79] felix/endpoint_mgr.go 1145: Applying /proc/sys configuration to interface. ifaceName="cali26ae9c02382"
2021-12-08 17:01:48.509 [INFO][79] felix/endpoint_mgr.go 476: Re-evaluated workload endpoint status adminUp=true failed=false known=true operUp=true status="up" workloadEndpointID=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"kube-system/calico-kube-controllers-67bb5696f5-sk9c4", EndpointId:"eth0"}
2021-12-08 17:01:48.509 [INFO][79] felix/status_combiner.go 58: Storing endpoint status update ipVersion=0x4 status="up" workload=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"kube-system/calico-kube-controllers-67bb5696f5-sk9c4", EndpointId:"eth0"}
2021-12-08 17:01:48.516 [INFO][79] felix/status_combiner.go 81: Endpoint up for at least one IP version id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"kube-system/calico-kube-controllers-67bb5696f5-sk9c4", EndpointId:"eth0"} ipVersion=0x4 status="up"
2021-12-08 17:01:48.516 [INFO][79] felix/status_combiner.go 98: Reporting combined status. id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"kube-system/calico-kube-controllers-67bb5696f5-sk9c4", EndpointId:"eth0"} status="up"
________________________________________________________________________________________________________________________
Assuming calico is initialized successfully - found 'HealthReport{Live:true, Ready:true}'
________________________________________________________________________________________________________________________
________________________________________________________________________________________________________________________
install() - completed
________________________________________________________________________________________________________________________
/var/log/messages: linked: link-var-log-messages
________________________________________________________________________________________________________________________
Creating environment file for kubeadm commands
________________________________________________________________________________________________________________________
removed ‘/opt/csap/csap-platform/working/kubelet/install-variables.sh’
append_file: Note: specified targetFile '/opt/csap/csap-platform/working/kubelet/install-variables.sh' does not exist, creating
append_file() : line: '# generated file' to file: '/opt/csap/csap-platform/working/kubelet/install-variables.sh'
________________________________________________________________________________________________________________________
'/opt/csap/csap-platform/working/kubelet/install-variables.sh':
# generated file
#
export csapUser=csap
export kubernetesAllInOne=false
export kubernetesStorage=/var/lib/kubelet
export kubeletExtraArgs=""
export calico_ip_method="interface=ens192"
export veth_mtu="1440"
export CSAP_FOLDER=/opt/csap/csap-platform
export AGENT_ENDPOINT=":8011"
export csapName=kubelet
export csapProcessId=kubelet
export csapWorkingDir=/opt/csap/csap-platform/working/kubelet
export csapPrimaryPort=8014
export masterBackupFolder=/mnt/CSAP_DEV01_NFS/kubernetes-backups
export imageRepository="none"
export kubeadmParameters="init --config /opt/csap/csap-platform/working/kubelet/configuration/kubeadm/kubeadm-Dec-08-11-59.yaml --ignore-preflight-errors=SystemVerification,DirAvailable--var-lib-etcd"
export kubernetesMasters="csap-dev04"
export kubernetesMasterDns=not-specified
export k8Version=1.21.5-0
export k8ImageVersion=v1.21.5
export isForceIpForApiServer=false
export kubernetesRepo=http://media.lab.sensus.net/media/third_party/kubernetes/kubernetes-el7.repo
export clusterToken="584t76.b0b7c7r75rbc0ml0"
________________________________________________________________________________________________________________________
12/08/2021 12:02:01 [email protected] Running: csap-deploy-as-root.sh
________________________________________________________________________________________________________________________
start() - Using systemctl to start kubelet
________________________________________________________________________________________________________________________
________________________________________________________________________________________________________________________
systemctl status kubelet.service
________________________________________________________________________________________________________________________
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Wed 2021-12-08 12:00:58 EST; 1min 4s ago
Docs: https://kubernetes.io/docs/
Main PID: 2665 (kubelet)
CGroup: /system.slice/kubelet.service
└─2665 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.4.1 --root-dir=/var/lib/kubelet
Dec 08 12:01:42 csap-dev04.lab.sensus.net kubelet[2665]: 2021-12-08 12:01:41.960 [INFO][5584] ipam_plugin.go 276: Calico CNI IPAM assigned addresses IPv4=[192.168.203.67/26] IPv6=[] ContainerID="5aa905508f7162b156b8e901fcfb62a24c5d845deaaa940e6b7725975df5e416" HandleID="k8s-pod-network.5aa905508f7162b156b8e901fcfb62a24c5d845deaaa940e6b7725975df5e416" Workload="csap--dev04.lab.sensus.net-k8s-coredns--558bd4d5db--hwwsw-eth0"
Dec 08 12:01:42 csap-dev04.lab.sensus.net kubelet[2665]: 2021-12-08 12:01:41.964 [INFO][5550] k8s.go 370: Populated endpoint ContainerID="5aa905508f7162b156b8e901fcfb62a24c5d845deaaa940e6b7725975df5e416" Namespace="kube-system" Pod="coredns-558bd4d5db-hwwsw" WorkloadEndpoint="csap--dev04.lab.sensus.net-k8s-coredns--558bd4d5db--hwwsw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"csap--dev04.lab.sensus.net-k8s-coredns--558bd4d5db--hwwsw-eth0", GenerateName:"coredns-558bd4d5db-", Namespace:"kube-system", SelfLink:"", UID:"1fe58dd6-a91d-4745-8dbc-52d40d2d45e5", ResourceVersion:"607", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63774579673, loc:(*time.Location)(0x26ec240)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"558bd4d5db", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"csap-dev04.lab.sensus.net", ContainerID:"", Pod:"coredns-558bd4d5db-hwwsw", Endpoint:"eth0", IPNetworks:[]string{"192.168.203.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35db140e159", MAC:"", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35}, v3.EndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35}, v3.EndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1}}}}
Dec 08 12:01:42 csap-dev04.lab.sensus.net kubelet[2665]: 2021-12-08 12:01:41.964 [INFO][5550] k8s.go 371: Calico CNI using IPs: [192.168.203.67/32] ContainerID="5aa905508f7162b156b8e901fcfb62a24c5d845deaaa940e6b7725975df5e416" Namespace="kube-system" Pod="coredns-558bd4d5db-hwwsw" WorkloadEndpoint="csap--dev04.lab.sensus.net-k8s-coredns--558bd4d5db--hwwsw-eth0"
Dec 08 12:01:42 csap-dev04.lab.sensus.net kubelet[2665]: 2021-12-08 12:01:41.964 [INFO][5550] dataplane_linux.go 66: Setting the host side veth name to cali35db140e159 ContainerID="5aa905508f7162b156b8e901fcfb62a24c5d845deaaa940e6b7725975df5e416" Namespace="kube-system" Pod="coredns-558bd4d5db-hwwsw" WorkloadEndpoint="csap--dev04.lab.sensus.net-k8s-coredns--558bd4d5db--hwwsw-eth0"
Dec 08 12:01:42 csap-dev04.lab.sensus.net kubelet[2665]: 2021-12-08 12:01:41.966 [INFO][5550] dataplane_linux.go 420: Disabling IPv4 forwarding ContainerID="5aa905508f7162b156b8e901fcfb62a24c5d845deaaa940e6b7725975df5e416" Namespace="kube-system" Pod="coredns-558bd4d5db-hwwsw" WorkloadEndpoint="csap--dev04.lab.sensus.net-k8s-coredns--558bd4d5db--hwwsw-eth0"
Dec 08 12:01:42 csap-dev04.lab.sensus.net kubelet[2665]: 2021-12-08 12:01:41.985 [INFO][5550] k8s.go 398: Added Mac, interface name, and active container ID to endpoint ContainerID="5aa905508f7162b156b8e901fcfb62a24c5d845deaaa940e6b7725975df5e416" Namespace="kube-system" Pod="coredns-558bd4d5db-hwwsw" WorkloadEndpoint="csap--dev04.lab.sensus.net-k8s-coredns--558bd4d5db--hwwsw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"csap--dev04.lab.sensus.net-k8s-coredns--558bd4d5db--hwwsw-eth0", GenerateName:"coredns-558bd4d5db-", Namespace:"kube-system", SelfLink:"", UID:"1fe58dd6-a91d-4745-8dbc-52d40d2d45e5", ResourceVersion:"607", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63774579673, loc:(*time.Location)(0x26ec240)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"558bd4d5db", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"csap-dev04.lab.sensus.net", ContainerID:"5aa905508f7162b156b8e901fcfb62a24c5d845deaaa940e6b7725975df5e416", Pod:"coredns-558bd4d5db-hwwsw", Endpoint:"eth0", IPNetworks:[]string{"192.168.203.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35db140e159", MAC:"de:8c:28:67:f7:fd", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35}, v3.EndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35}, v3.EndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1}}}}
Dec 08 12:01:42 csap-dev04.lab.sensus.net kubelet[2665]: 2021-12-08 12:01:41.993 [INFO][5550] k8s.go 472: Wrote updated endpoint to datastore ContainerID="5aa905508f7162b156b8e901fcfb62a24c5d845deaaa940e6b7725975df5e416" Namespace="kube-system" Pod="coredns-558bd4d5db-hwwsw" WorkloadEndpoint="csap--dev04.lab.sensus.net-k8s-coredns--558bd4d5db--hwwsw-eth0"
Dec 08 12:01:42 csap-dev04.lab.sensus.net kubelet[2665]: E1208 12:01:42.055477 2665 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: c126afc0b08b8e849530138d1ba140bcfe86ada8d0359766cdf07cb77d07251f" containerID="c126afc0b08b8e849530138d1ba140bcfe86ada8d0359766cdf07cb77d07251f"
Dec 08 12:01:42 csap-dev04.lab.sensus.net kubelet[2665]: E1208 12:01:42.055523 2665 kuberuntime_manager.go:1018] "getPodContainerStatuses for pod failed" err="rpc error: code = Unknown desc = Error: No such container: c126afc0b08b8e849530138d1ba140bcfe86ada8d0359766cdf07cb77d07251f" pod="kube-system/coredns-558bd4d5db-hwwsw"
Dec 08 12:01:45 csap-dev04.lab.sensus.net kubelet[2665]: E1208 12:01:45.719596 2665 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/7f702dc6-6a51-4b9c-8f6c-a0e4ba350a82/etc-hosts with error exit status 1" pod="kube-system/calico-kube-controllers-67bb5696f5-sk9c4"
________________________________________________________________________________________________________________________
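Note: the same kubelet health check can be repeated manually at any point after the install; these are standard systemd commands, nothing CSAP-specific:

    # repeat the health check manually at any time
    systemctl is-active kubelet.service
    systemctl is-enabled kubelet.service
    journalctl -u kubelet.service --since "5 minutes ago" --no-pager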
Updating 'csap' kubernetes credentials: '/opt/csap/.kube'
________________________________________________________________________________________________________________________
mkdir: created directory ‘/opt/csap/.kube’
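Note: the log only shows the directory creation; the conventional kubeadm credential update that presumably follows is (a sketch, run as root):

    # assumed equivalent of the credential update (run as root)
    mkdir --parents /opt/csap/.kube
    cp --force /etc/kubernetes/admin.conf /opt/csap/.kube/config
    chown --recursive csap:csap /opt/csap/.kube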
________________________________________________________________________________________________________________________
api_service_start(): Creating configuration shortcuts in /opt/csap/csap-platform/working/kubelet/os-configuration-folders
________________________________________________________________________________________________________________________
/var/lib/kubelet: linked: link-var-lib-kubelet
/var/lib/docker: linked: link-var-lib-docker
/etc/kubernetes: linked: link-etc-kubernetes
/etc/docker: linked: link-etc-docker
/etc/sysctl.d/k8s.conf: linked: link-etc-sysctl.d-k8s.conf
/etc/systemd/system/kubelet.service.d: linked: link-etc-systemd-system-kubelet.service.d
/var/log/calico/cni/cni.log: linked: link-var-log-calico-cni-cni.log
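Note: each 'linked' line above is a symlink placed in os-configuration-folders so root-owned paths can be browsed from the service working folder; a sketch of the likely mechanism (link names taken from the log):

    cd /opt/csap/csap-platform/working/kubelet/os-configuration-folders
    ln --symbolic /var/lib/kubelet       link-var-lib-kubelet
    ln --symbolic /etc/kubernetes        link-etc-kubernetes
    ln --symbolic /etc/sysctl.d/k8s.conf link-etc-sysctl.d-k8s.conf
    # ... one link per path listed above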
________________________________________________________________________________________________________________________
Prepending kubeadm version to package version
________________________________________________________________________________________________________________________
Renaming version folder: /opt/csap/csap-platform/working/kubelet/version/21.11 to 1.21.5--21.11
‘/opt/csap/csap-platform/working/kubelet/version/21.11’ -> ‘/opt/csap/csap-platform/working/kubelet/version/1.21.5--21.11’
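Note: the 1.21.5 prefix presumably comes from the installed binary; a sketch assuming 'kubeadm version -o short' (which prints e.g. v1.21.5) is the source:

    kubeadmVersion=$(kubeadm version --output short)      # e.g. v1.21.5
    versionDir=/opt/csap/csap-platform/working/kubelet/version
    mv --verbose "$versionDir/21.11" "$versionDir/${kubeadmVersion#v}--21.11"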
________________________________________________________________________________________________________________________
check_to_untaint(): single master detected - removing master taints from: 'csap-dev04.lab.sensus.net'
________________________________________________________________________________________________________________________
node/csap-dev04.lab.sensus.net patched
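Note: on a single-master cluster the control-plane taint must be removed or workloads will never schedule. The log shows a patch; the conventional kubectl equivalent is:

    # strip the master taint from the lone control-plane node
    kubectl taint nodes csap-dev04.lab.sensus.net node-role.kubernetes.io/master-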
________________________________________________________________________________________________________________________
Waiting for '3' calico-nodes to be started prior to deployments on the primary master.
________________________________________________________________________________________________________________________
NOTE: the host dashboard may be used to terminate csap-start.sh if desired.
________________________________________________________________________________________________________________________
Waiting for: '3' pods in run state: 'calico-node' in '--namespace=kube-system'.
________________________________________________________________________________________________________________________
------------------------------------------------- attempt 1 of 500 -------------------------------------------------
calico-node-jxl5q 1/1 Running 0 52s
... attempts 2 - 8: calico-node-jxl5q remained 1/1 Running, no change ...
------------------------------------------------- attempt 9 of 500 -------------------------------------------------
calico-node-4z245 0/1 Pending 0 1s
calico-node-jxl5q 1/1 Running 0 70s
------------------------------------------------ attempt 10 of 500 ------------------------------------------------
calico-node-4z245 0/1 Init:0/3 0 3s
calico-node-jxl5q 1/1 Running 0 72s
------------------------------------------------ attempt 11 of 500 ------------------------------------------------
calico-node-4z245 0/1 Init:0/3 0 5s
calico-node-jxl5q 1/1 Running 0 74s
------------------------------------------------ attempt 12 of 500 ------------------------------------------------
calico-node-4z245 0/1 Init:1/3 0 7s
calico-node-jxl5q 1/1 Running 0 76s
------------------------------------------------ attempt 13 of 500 ------------------------------------------------
calico-node-4z245 0/1 PodInitializing 0 9s
calico-node-jxl5q 1/1 Running 0 78s
------------------------------------------------ attempt 14 of 500 ------------------------------------------------
calico-node-4z245 0/1 PodInitializing 0 12s
calico-node-jxl5q 1/1 Running 0 81s
------------------------------------------------ attempt 15 of 500 ------------------------------------------------
calico-node-4z245 0/1 PodInitializing 0 14s
calico-node-jxl5q 1/1 Running 0 83s
------------------------------------------------ attempt 16 of 500 ------------------------------------------------
calico-node-4z245 0/1 Running 0 16s
calico-node-jxl5q 1/1 Running 0 85s
------------------------------------------------ attempt 17 of 500 ------------------------------------------------
calico-node-4z245 0/1 Running 0 18s
calico-node-jxl5q 1/1 Running 0 87s
------------------------------------------------ attempt 18 of 500 ------------------------------------------------
calico-node-4z245 0/1 Running 0 20s
calico-node-jxl5q 1/1 Running 0 89s
------------------------------------------------ attempt 19 of 500 ------------------------------------------------
calico-node-4z245 1/1 Running 0 22s
calico-node-jxl5q 1/1 Running 0 91s
... attempts 20 - 54: calico-node-4z245 and calico-node-jxl5q both 1/1 Running, no change ...
------------------------------------------------ attempt 55 of 500 ------------------------------------------------
calico-node-2v8wn 0/1 Pending 0 1s
calico-node-4z245 1/1 Running 0 101s
calico-node-jxl5q 1/1 Running 0 2m50s
------------------------------------------------ attempt 56 of 500 ------------------------------------------------
calico-node-2v8wn 0/1 Init:0/3 0 3s
calico-node-4z245 1/1 Running 0 103s
calico-node-jxl5q 1/1 Running 0 2m52s
------------------------------------------------ attempt 57 of 500 ------------------------------------------------
calico-node-2v8wn 0/1 Init:0/3 0 5s
calico-node-4z245 1/1 Running 0 105s
calico-node-jxl5q 1/1 Running 0 2m54s
------------------------------------------------ attempt 58 of 500 ------------------------------------------------
calico-node-2v8wn 0/1 Init:2/3 0 7s
calico-node-4z245 1/1 Running 0 107s
calico-node-jxl5q 1/1 Running 0 2m56s
------------------------------------------------ attempt 59 of 500 ------------------------------------------------
calico-node-2v8wn 0/1 PodInitializing 0 10s
calico-node-4z245 1/1 Running 0 110s
calico-node-jxl5q 1/1 Running 0 2m59s
------------------------------------------------ attempt 60 of 500 ------------------------------------------------
calico-node-2v8wn 0/1 PodInitializing 0 12s
calico-node-4z245 1/1 Running 0 112s
calico-node-jxl5q 1/1 Running 0 3m1s
------------------------------------------------ attempt 61 of 500 ------------------------------------------------
calico-node-2v8wn 0/1 PodInitializing 0 14s
calico-node-4z245 1/1 Running 0 114s
calico-node-jxl5q 1/1 Running 0 3m3s
------------------------------------------------ attempt 62 of 500 ------------------------------------------------
calico-node-2v8wn 0/1 Running 0 16s
calico-node-4z245 1/1 Running 0 116s
calico-node-jxl5q 1/1 Running 0 3m5s
Pod pattern: 'calico-node', '3' in running state
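Note: the attempt loop above is plain polling against kubectl; a minimal sketch of the logic (function and variable names are illustrative, not CSAP's):

    wait_for_pods_running() {
        local pattern="$1" minCount="$2" namespace="$3"
        local attempt running
        for attempt in $(seq 1 500); do
            running=$(kubectl get pods "$namespace" --no-headers 2>/dev/null \
                | grep "$pattern" | grep --count Running)
            (( running >= minCount )) && return 0
            sleep 2
        done
        return 1
    }
    wait_for_pods_running calico-node 3 --namespace=kube-system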
12/08/2021 12:04:18 [email protected] Running: sanity-tests.sh
*
**
***
****
***** dns_tests
****
***
**
*
No resources found in default namespace.
Not Found: 'dnsutils' in '--namespace=default'
kubectl create --filename=/opt/csap/csap-platform/working/kubelet/os-configuration-folders/dnsutils.yml:
________________________________________________________________________________________________________________________
pod/dnsutils created
________________________________________________________________________________________________________________________
________________________________________________________________________________________________________________________
Waiting for: '1' pods with conditions passed: 'dnsutils' in 'all'.
________________________________________________________________________________________________________________________
2 of 4 conditions: dnsutils: ContainersReady False
attempt 1 of 200: 'dnsutils', podsAllPassed: 0. Minimum: 1
2 of 4 conditions: dnsutils: ContainersReady False
attempt 2 of 200: 'dnsutils', podsAllPassed: 0. Minimum: 1
4 of 4 conditions: dnsutils:
attempt 3 of 200: 'dnsutils', podsAllPassed: 1. Minimum: 1
Pod pattern: 'dnsutils', '' in running state
kubectl exec -it dnsutils --namespace=default -- nslookup kubernetes.default:
________________________________________________________________________________________________________________________
Unable to use a TTY - input is not a terminal or the right kind of file
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: kubernetes.default.svc.cluster.local
Address: 10.96.0.1
________________________________________________________________________________________________________________________
kubectl exec -it dnsutils --namespace=default -- nslookup google.com:
________________________________________________________________________________________________________________________
Unable to use a TTY - input is not a terminal or the right kind of file
Server: 10.96.0.10
Address: 10.96.0.10#53
Non-authoritative answer:
Name: google.com
Address: 64.233.185.101
Name: google.com
Address: 64.233.185.102
Name: google.com
Address: 64.233.185.138
Name: google.com
Address: 64.233.185.139
Name: google.com
Address: 64.233.185.100
Name: google.com
Address: 64.233.185.113
Name: google.com
Address: 2607:f8b0:4002:c09::66
Name: google.com
Address: 2607:f8b0:4002:c09::64
Name: google.com
Address: 2607:f8b0:4002:c09::8b
Name: google.com
Address: 2607:f8b0:4002:c09::65
________________________________________________________________________________________________________________________
pod "dnsutils" deleted
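Note: this is the standard Kubernetes DNS debugging recipe, condensed here from the commands recorded above (kubectl wait is substituted for the condition-polling loop):

    kubectl create --filename=/opt/csap/csap-platform/working/kubelet/os-configuration-folders/dnsutils.yml
    kubectl wait --for=condition=Ready pod/dnsutils --timeout=120s
    kubectl exec dnsutils --namespace=default -- nslookup kubernetes.default
    kubectl exec dnsutils --namespace=default -- nslookup google.com
    kubectl delete pod dnsutils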
Re-enable scheduling on csap-dev04.lab.sensus.net:
________________________________________________________________________________________________________________________
node/csap-dev04.lab.sensus.net already uncordoned
________________________________________________________________________________________________________________________
12/08/2021 12:04:58 [email protected] Running: sanity-tests.sh
*
**
***
****
***** status_tests
****
***
**
*
________________________________________________________________________________________________________________________
Querying nodes - summary
________________________________________________________________________________________________________________________
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
csap-dev04.lab.sensus.net Ready control-plane,master 4m2s v1.21.5 10.22.14.244 <none> CentOS Linux 7 (Core) 3.10.0-1127.19.1.el7.x86_64 docker://20.10.6
csap-dev05.lab.sensus.net Ready <none> 56s v1.21.5 10.22.14.247 <none> CentOS Linux 7 (Core) 3.10.0-1127.19.1.el7.x86_64 docker://20.10.6
csap-dev06.lab.sensus.net Ready <none> 2m36s v1.21.5 10.22.15.8 <none> CentOS Linux 7 (Core) 3.10.0-1127.19.1.el7.x86_64 docker://20.10.6
________________________________________________________________________________________________________________________
Querying nodes - taints
________________________________________________________________________________________________________________________
NAME TAINTS
csap-dev04.lab.sensus.net <none>
csap-dev05.lab.sensus.net <none>
csap-dev06.lab.sensus.net <none>
________________________________________________________________________________________________________________________
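Note: both node queries are reproducible from any kubeconfig-equipped shell; the summary is plain 'kubectl get nodes -o wide', and the taint table is most likely a custom-columns query (the exact column spec below is an assumption):

    kubectl get nodes --output wide
    kubectl get nodes --output custom-columns=NAME:.metadata.name,TAINTS:.spec.taints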
Querying pods: 'kubectl get pods --all-namespaces'
________________________________________________________________________________________________________________________
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-67bb5696f5-sk9c4 1/1 Running 0 3m46s
kube-system calico-node-2v8wn 1/1 Running 0 57s
kube-system calico-node-4z245 1/1 Running 0 2m37s
kube-system calico-node-jxl5q 1/1 Running 0 3m46s
kube-system coredns-558bd4d5db-42dg7 1/1 Running 0 3m46s
kube-system coredns-558bd4d5db-hwwsw 1/1 Running 0 3m46s
kube-system etcd-csap-dev04.lab.sensus.net 1/1 Running 0 4m2s
kube-system kube-apiserver-csap-dev04.lab.sensus.net 1/1 Running 0 3m54s
kube-system kube-controller-manager-csap-dev04.lab.sensus.net 1/1 Running 0 3m54s
kube-system kube-proxy-8v6cm 1/1 Running 0 2m37s
kube-system kube-proxy-gcmdn 1/1 Running 0 57s
kube-system kube-proxy-snffc 1/1 Running 0 3m46s
kube-system kube-scheduler-csap-dev04.lab.sensus.net 1/1 Running 0 3m54s
________________________________________________________________________________________________________________________
Querying services: 'kubectl -n kube-system get service'
________________________________________________________________________________________________________________________
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 4m1s
...
________________________________________________________________________________________________________________________
Flag to exit read loop in AdminController.java XXXYYYZZZ_AdminController
Executing OS command on host csap-dev04:/opt/csap/csap-platform/bin/csap-job-run.sh_P_
12/08/2021 12:04:59 [email protected] Running: csap-job-run.sh
*
**
***
****
***** Service Job: kubelet
****
***
**
*
job: nfs_add_mount 10.22.10.59:/CSAP_DEV01_NFS /mnt/CSAP_DEV01_NFS
background: false
event: event-post-start
firstCommand: 'nfs_add_mount'
type: 'function'
-------------------------------------------------- command output --------------------------------------------------
________________________________________________________________________________________________________________________
nfs_add_mount() - mount source: '10.22.10.59:/CSAP_DEV01_NFS', mount_target: '/mnt/CSAP_DEV01_NFS', mount_options: 'vers=3', exitIfPresent: true
________________________________________________________________________________________________________________________
________________________________________________________________________________________________________________________
INFO: existing mount found '10.22.10.59:/CSAP_DEV01_NFS', skipping remaining commands.
________________________________________________________________________________________________________________________
__COMPLETED__
*** 12:04:59, December 8 2021
csap-dev04 11:58:04 December 8 2021
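Note: nfs_add_mount is idempotent: with exitIfPresent true it returns as soon as the source appears in the mount table, which is what happened here. A minimal sketch of that behavior (helper internals are assumed, not the CSAP source):

    nfs_add_mount() {
        local source="$1" target="$2" options="${3:-vers=3}"
        if mount | grep --quiet --fixed-strings "$source"; then
            echo "INFO: existing mount found '$source', skipping remaining commands."
            return 0
        fi
        mkdir --parents "$target"
        mount --types nfs --options "$options" "$source" "$target"
    }
    nfs_add_mount 10.22.10.59:/CSAP_DEV01_NFS /mnt/CSAP_DEV01_NFS vers=3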
Executing OS command on host csap-dev04:/opt/csap/csap-platform/bin/csap-start.sh_P_
12/08/2021 11:58:04 [email protected] Running: csap-start.sh
cli: process_csap_cli_args() parsing parameters
JAVA_HOME: '/opt/csap/java/openjdk-11.0.12_7'
java -version: 'OpenJDK 64-Bit Server VM 18.9 (build 11.0.12+7, mixed mode)'
*
**
***
****
***** csap-start.sh: ensuring processes are stopped
****
***
**
*
process check: No existing processes found
________________________________________________________________________________________________________________________
wait_for_port_free csap-start.sh: pre start check - waiting for filter: '4243' to not be found in 'ss --numeric --processes' (--listen)
________________________________________________________________________________________________________________________
attempt 1 of 2: found: '0'
No remaining instances of '4243'
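Note: wait_for_port_free polls ss until no listener matches the agent port; a sketch mirroring the invocation shown above (helper internals are assumed):

    wait_for_port_free() {
        local port="$1" maxAttempts="${2:-2}" attempt found
        for attempt in $(seq 1 "$maxAttempts"); do
            found=$(ss --numeric --processes --listen | grep --count ":$port ")
            echo "attempt $attempt of $maxAttempts: found: '$found'"
            (( found == 0 )) && return 0
            sleep 5
        done
        return 1
    }
    wait_for_port_free 4243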
working folder: mkdir: created directory ‘/opt/csap/csap-platform/working/docker’
csap api: checking for /opt/csap/csap-platform/working/docker/csap-api.sh
extracting: /opt/csap/csap-platform/packages/docker.zip to /opt/csap/csap-platform/working/docker
scripts: Found scripts in package, running dos2unix
permissions: running chmod -R 755 /opt/csap/csap-platform/working/docker
Loading: csap-api.sh
________________________________________________________________________________________________________________________
No match 'csap-dev04' in neverRemoteHosts: 'csap-dev01,csap-dev02'
________________________________________________________________________________________________________________________
*
**
***
****
***** CSAP docker package
****
***
**
*
dockerPackage: docker-ce-20.10.6-3.el7
dockerCliPackage: docker-ce-cli-1:20.10.6-3.el7
dockerRepo: http://media.lab.sensus.net/media/third_party/linux/CentOS/docker-ce.repo
dockerStorage: /var/lib/docker
dockerStorageDriver: overlay2
allowRemote: true
neverRemoteHosts: csap-dev01,csap-dev02
dockerCommandsScript: root-commands.sh
________________________________________________________________________________________________________________________
api_service_start
________________________________________________________________________________________________________________________
backup_file: Backing up /opt/csap/csap-platform/working/docker/configuration to /opt/csap/csap-platform/working/docker/configuration.original
service resources: copying '/opt/csap/csap-platform/definition/resources/docker/common'
‘/opt/csap/csap-platform/definition/resources/docker/common/configuration/daemon.json’ -> ‘/opt/csap/csap-platform/working/docker/configuration/daemon.json’
‘/opt/csap/csap-platform/definition/resources/docker/common/csap-service.json’ -> ‘/opt/csap/csap-platform/working/docker/csap-service.json’
‘/opt/csap/csap-platform/definition/resources/docker/common/csap-service.yaml’ -> ‘/opt/csap/csap-platform/working/docker/csap-service.yaml’
append_file: Note: specified targetFile '/opt/csap/csap-platform/working/docker/install-variables.sh' does not exist, creating
append_file() : line: '# generated file' to file: '/opt/csap/csap-platform/working/docker/install-variables.sh'
________________________________________________________________________________________________________________________
'/opt/csap/csap-platform/working/docker/install-variables.sh':
# generated file
#
export csapUser=csap
export csapWorkingDir=/opt/csap/csap-platform/working/docker
export csapPrimaryPort=4243
export dockerStorage=/var/lib/docker
export dockerStorageDriver=overlay2
export allowRemote=true
export dockerPackage=docker-ce-20.10.6-3.el7
export dockerCliPackage=docker-ce-cli-1:20.10.6-3.el7
export dockerRepo=http://media.lab.sensus.net/media/third_party/linux/CentOS/docker-ce.repo
________________________________________________________________________________________________________________________
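Note: append_file creates the target when missing and then appends one line per call, which is how install-variables.sh above is built up. A sketch of such a helper (illustrative, not the CSAP source):

    append_file() {
        local line="$1" targetFile="$2"
        if [[ ! -f "$targetFile" ]]; then
            echo "append_file: Note: specified targetFile '$targetFile' does not exist, creating"
            touch "$targetFile"
        fi
        echo "$line" >> "$targetFile"
    }
    append_file "# generated file" /opt/csap/csap-platform/working/docker/install-variables.sh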
Warning - Did not find CSAP_FOLDER env variable, exported as: '/opt/csap/csap-platform'
12/08/2021 11:58:05 [email protected] Running: csap-deploy-as-root.sh
________________________________________________________________________________________________________________________
user: 'csap', script: 'csap-deploy-as-root.sh': 'install', dockerStorage: '/var/lib/docker', allowRemote: 'true'
dockerPackage: 'docker-ce-20.10.6-3.el7', dockerRepo: 'http://media.lab.sensus.net/media/third_party/linux/CentOS/docker-ce.repo'
Learn more about docker-ce: https://docs.docker.com/install/linux/docker-ce/centos/
________________________________________________________________________________________________________________________
package docker-ce-20.10.6-3.el7 is not installed
________________________________________________________________________________________________________________________
Starting install of docker-ce-20.10.6-3.el7, ref: 'https://docs.docker.com/install/linux/docker-ce/centos/'
________________________________________________________________________________________________________________________
adding repo: http://media.lab.sensus.net/media/third_party/linux/CentOS/docker-ce.repo
yum enabled repositories:
________________________________________________________________________________________________________________________
base/7/x86_64 CentOS-7 - Base enabled: 10,072
docker-ce-stable-community/7 Docker CE Stable - x86_64 enabled: 135
epel/x86_64 Extra Packages for Enterpris enabled: 13,690
extras/7/x86_64 CentOS-7 - Extras enabled: 500
updates/7/x86_64 CentOS-7 - Updates enabled: 3,187
________________________________________________________________________________________________________________________
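Note: the repo add is likely a yum-config-manager call or a copy of the .repo file into /etc/yum.repos.d; a sketch assuming the former:

    yum-config-manager --add-repo http://media.lab.sensus.net/media/third_party/linux/CentOS/docker-ce.repo
    yum repolist enabled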
----------------------- yum installing docker-ce-20.10.6-3.el7 docker-ce-cli-1:20.10.6-3.el7 -----------------------
Loaded plugins: fastestmirror, remove-with-leaves, versionlock
Loading mirror speeds from cached hostfile
* base: mirror.wdc1.us.leaseweb.net
* epel: d2lzkl7pfhq30w.cloudfront.net
* extras: www.gtlib.gatech.edu
* updates: mirror.atl.genesisadaptive.com
No package docker-ce-cli-1:20.10.6-3.el7 available.
Resolving Dependencies
--> Running transaction check
---> Package docker-ce.x86_64 3:20.10.6-3.el7 will be installed
--> Processing Dependency: container-selinux >= 2:2.74 for package: 3:docker-ce-20.10.6-3.el7.x86_64
--> Processing Dependency: containerd.io >= 1.4.1 for package: 3:docker-ce-20.10.6-3.el7.x86_64
--> Processing Dependency: docker-ce-cli for package: 3:docker-ce-20.10.6-3.el7.x86_64
--> Processing Dependency: docker-ce-rootless-extras for package: 3:docker-ce-20.10.6-3.el7.x86_64
--> Processing Dependency: libcgroup for package: 3:docker-ce-20.10.6-3.el7.x86_64
--> Running transaction check
---> Package container-selinux.noarch 2:2.119.2-1.911c772.el7_8 will be installed
--> Processing Dependency: policycoreutils-python for package: 2:container-selinux-2.119.2-1.911c772.el7_8.noarch
---> Package containerd.io.x86_64 0:1.4.12-3.1.el7 will be installed
---> Package docker-ce-cli.x86_64 1:20.10.11-3.el7 will be installed
--> Processing Dependency: docker-scan-plugin(x86-64) for package: 1:docker-ce-cli-20.10.11-3.el7.x86_64
---> Package docker-ce-rootless-extras.x86_64 0:20.10.11-3.el7 will be installed
--> Processing Dependency: fuse-overlayfs >= 0.7 for package: docker-ce-rootless-extras-20.10.11-3.el7.x86_64
--> Processing Dependency: slirp4netns >= 0.4 for package: docker-ce-rootless-extras-20.10.11-3.el7.x86_64
---> Package libcgroup.x86_64 0:0.41-21.el7 will be installed
--> Running transaction check
---> Package docker-scan-plugin.x86_64 0:0.9.0-3.el7 will be installed
---> Package fuse-overlayfs.x86_64 0:0.7.2-6.el7_8 will be installed
--> Processing Dependency: libfuse3.so.3(FUSE_3.2)(64bit) for package: fuse-overlayfs-0.7.2-6.el7_8.x86_64
--> Processing Dependency: libfuse3.so.3(FUSE_3.0)(64bit) for package: fuse-overlayfs-0.7.2-6.el7_8.x86_64
--> Processing Dependency: libfuse3.so.3()(64bit) for package: fuse-overlayfs-0.7.2-6.el7_8.x86_64
---> Package policycoreutils-python.x86_64 0:2.5-34.el7 will be installed
--> Processing Dependency: setools-libs >= 3.3.8-4 for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: libsemanage-python >= 2.5-14 for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: audit-libs-python >= 2.1.3-4 for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: python-IPy for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: libqpol.so.1(VERS_1.4)(64bit) for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: libqpol.so.1(VERS_1.2)(64bit) for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: libapol.so.4(VERS_4.0)(64bit) for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: checkpolicy for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: libqpol.so.1()(64bit) for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: libapol.so.4()(64bit) for package: policycoreutils-python-2.5-34.el7.x86_64
---> Package slirp4netns.x86_64 0:0.4.3-4.el7_8 will be installed
--> Running transaction check
---> Package audit-libs-python.x86_64 0:2.8.5-4.el7 will be installed
---> Package checkpolicy.x86_64 0:2.5-8.el7 will be installed
---> Package fuse3-libs.x86_64 0:3.6.1-4.el7 will be installed
---> Package libsemanage-python.x86_64 0:2.5-14.el7 will be installed
---> Package python-IPy.noarch 0:0.75-6.el7 will be installed
---> Package setools-libs.x86_64 0:3.3.8-4.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
docker-ce x86_64 3:20.10.6-3.el7 docker-ce-stable-community 27 M
Installing for dependencies:
audit-libs-python x86_64 2.8.5-4.el7 base 76 k
checkpolicy x86_64 2.5-8.el7 base 295 k
container-selinux noarch 2:2.119.2-1.911c772.el7_8
extras 40 k
containerd.io x86_64 1.4.12-3.1.el7 docker-ce-stable-community 28 M
docker-ce-cli x86_64 1:20.10.11-3.el7 docker-ce-stable-community 29 M
docker-ce-rootless-extras
x86_64 20.10.11-3.el7 docker-ce-stable-community 8.0 M
docker-scan-plugin x86_64 0.9.0-3.el7 docker-ce-stable-community 3.7 M
fuse-overlayfs x86_64 0.7.2-6.el7_8 extras 54 k
fuse3-libs x86_64 3.6.1-4.el7 extras 82 k
libcgroup x86_64 0.41-21.el7 base 66 k
libsemanage-python x86_64 2.5-14.el7 base 113 k
policycoreutils-python
x86_64 2.5-34.el7 base 457 k
python-IPy noarch 0.75-6.el7 base 32 k
setools-libs x86_64 3.3.8-4.el7 base 620 k
slirp4netns x86_64 0.4.3-4.el7_8 extras 81 k
Transaction Summary
================================================================================
Install 1 Package (+15 Dependent packages)
Total download size: 98 M
Installed size: 401 M
Downloading packages:
--------------------------------------------------------------------------------
Total 65 MB/s | 98 MB 00:01
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : docker-scan-plugin-0.9.0-3.el7.x86_64 1/16
Installing : 1:docker-ce-cli-20.10.11-3.el7.x86_64 2/16
Installing : libcgroup-0.41-21.el7.x86_64 3/16
Installing : setools-libs-3.3.8-4.el7.x86_64 4/16
Installing : audit-libs-python-2.8.5-4.el7.x86_64 5/16
Installing : checkpolicy-2.5-8.el7.x86_64 6/16
Installing : python-IPy-0.75-6.el7.noarch 7/16
Installing : slirp4netns-0.4.3-4.el7_8.x86_64 8/16
Installing : libsemanage-python-2.5-14.el7.x86_64 9/16
Installing : policycoreutils-python-2.5-34.el7.x86_64 10/16
Installing : 2:container-selinux-2.119.2-1.911c772.el7_8.noarch 11/16
Installing : containerd.io-1.4.12-3.1.el7.x86_64 12/16
Installing : fuse3-libs-3.6.1-4.el7.x86_64 13/16
Installing : fuse-overlayfs-0.7.2-6.el7_8.x86_64 14/16
Installing : docker-ce-rootless-extras-20.10.11-3.el7.x86_64 15/16
Installing : 3:docker-ce-20.10.6-3.el7.x86_64 16/16
Verifying : 1:docker-ce-cli-20.10.11-3.el7.x86_64 1/16
Verifying : docker-scan-plugin-0.9.0-3.el7.x86_64 2/16
Verifying : fuse3-libs-3.6.1-4.el7.x86_64 3/16
Verifying : fuse-overlayfs-0.7.2-6.el7_8.x86_64 4/16
Verifying : libsemanage-python-2.5-14.el7.x86_64 5/16
Verifying : 3:docker-ce-20.10.6-3.el7.x86_64 6/16
Verifying : slirp4netns-0.4.3-4.el7_8.x86_64 7/16
Verifying : 2:container-selinux-2.119.2-1.911c772.el7_8.noarch 8/16
Verifying : python-IPy-0.75-6.el7.noarch 9/16
Verifying : checkpolicy-2.5-8.el7.x86_64 10/16
Verifying : docker-ce-rootless-extras-20.10.11-3.el7.x86_64 11/16
Verifying : policycoreutils-python-2.5-34.el7.x86_64 12/16
Verifying : audit-libs-python-2.8.5-4.el7.x86_64 13/16
Verifying : setools-libs-3.3.8-4.el7.x86_64 14/16
Verifying : containerd.io-1.4.12-3.1.el7.x86_64 15/16
Verifying : libcgroup-0.41-21.el7.x86_64 16/16
Installed:
docker-ce.x86_64 3:20.10.6-3.el7
Dependency Installed:
audit-libs-python.x86_64 0:2.8.5-4.el7
checkpolicy.x86_64 0:2.5-8.el7
container-selinux.noarch 2:2.119.2-1.911c772.el7_8
containerd.io.x86_64 0:1.4.12-3.1.el7
docker-ce-cli.x86_64 1:20.10.11-3.el7
docker-ce-rootless-extras.x86_64 0:20.10.11-3.el7
docker-scan-plugin.x86_64 0:0.9.0-3.el7
fuse-overlayfs.x86_64 0:0.7.2-6.el7_8
fuse3-libs.x86_64 0:3.6.1-4.el7
libcgroup.x86_64 0:0.41-21.el7
libsemanage-python.x86_64 0:2.5-14.el7
policycoreutils-python.x86_64 0:2.5-34.el7
python-IPy.noarch 0:0.75-6.el7
setools-libs.x86_64 0:3.3.8-4.el7
slirp4netns.x86_64 0:0.4.3-4.el7_8
Complete!
yum version lock: 'docker-ce-20.10.6-3.el7 docker-ce-cli-1:20.10.6-3.el7':
________________________________________________________________________________________________________________________
Loaded plugins: fastestmirror, remove-with-leaves, versionlock
Adding versionlock on: 3:docker-ce-20.10.6-3.el7
versionlock added: 1
________________________________________________________________________________________________________________________
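Note: the requested CLI spec 'docker-ce-cli-1:20.10.6-3.el7' was reported as not available above, so yum resolved docker-ce-cli 1:20.10.11-3.el7 through dependency resolution instead, and only docker-ce itself ends up locked. With the versionlock plugin (already loaded per the output), the lock is equivalent to:

    yum versionlock add docker-ce-20.10.6-3.el7
    yum versionlock list    # confirm the 3:docker-ce-20.10.6-3.el7 entry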
-------------------------------- Updating docker settings:'/etc/docker/daemon.json' --------------------------------
Updating /etc/docker/daemon.json with /opt/csap/csap-platform/working/docker/configuration/daemon.json. /etc/docker/daemon.json is being backed up
WARNING: Exposing host to remote connections: 'tcp://0.0.0.0:4243'
replace_all_in_file: Replacing 1 occurrences of '_CSAP_ALLOW_REMOTE_' with 'tcp://0.0.0.0:4243' in '/etc/docker/daemon.json'
replace_all_in_file: Replacing 1 occurrences of '_CSAP_STORAGE_' with '/var/lib/docker' in '/etc/docker/daemon.json'
replace_all_in_file: Replacing 1 occurrences of '_CSAP_DRIVER_' with 'overlay2' in '/etc/docker/daemon.json'
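Note: replace_all_in_file is token substitution over the templated daemon.json; a sketch of the likely mechanism with sed (the '|' delimiter avoids clashing with the slashes in the values):

    sed --in-place "s|_CSAP_ALLOW_REMOTE_|tcp://0.0.0.0:4243|g" /etc/docker/daemon.json
    sed --in-place "s|_CSAP_STORAGE_|/var/lib/docker|g"         /etc/docker/daemon.json
    sed --in-place "s|_CSAP_DRIVER_|overlay2|g"                 /etc/docker/daemon.json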
configuration file: '/etc/docker/daemon.json':
________________________________________________________________________________________________________________________
{
"exec-opts": ["native.cgroupdriver=systemd"],
"registry-mirrors": [
"http://docker.lab.sensus.net"
],
"insecure-registries": [
"sensus-docker.lab.sensus.net",
"docker.lab.sensus.net"
],
"storage-driver": "overlay2",
"data-root": "/var/lib/docker",
"hosts": [
"unix:///var/run/docker.sock",
"tcp://0.0.0.0:4243"
],
"log-driver": "json-file",
"log-opts": {
"max-size": "30m",
"max-file": "1"
}
}
________________________________________________________________________________________________________________________
Found existing dockerStorage '/var/lib/docker'.
configuration: creating file: '/etc/systemd/system/docker.service.d/docker.conf'
mkdir: created directory ‘/etc/systemd/system/docker.service.d’
configuration: '/etc/systemd/system/docker.service.d/docker.conf':
________________________________________________________________________________________________________________________
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --containerd=/run/containerd/containerd.sock
________________________________________________________________________________________________________________________
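Note: the blank ExecStart= line is deliberate: systemd rejects a second ExecStart in a drop-in for this service type unless the first is cleared, and the packaged docker.service passes '-H fd://', which conflicts with the 'hosts' array in daemon.json. Clearing ExecStart and restating it without '-H' lets daemon.json own the listener list. After writing the drop-in, systemd must re-read unit definitions:

    systemctl daemon-reload
    systemctl cat docker.service    # shows the packaged unit plus this drop-in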
Creating preserve file: '/var/lib/docker/preserve-over-reinstalls'. Use file manager or run clean to remove
removed ‘/opt/csap/csap-platform/working/docker/install-variables.sh’
append_file: Note: specified targetFile '/opt/csap/csap-platform/working/docker/install-variables.sh' does not exist, creating
append_file() : line: '# generated file' to file: '/opt/csap/csap-platform/working/docker/install-variables.sh'
________________________________________________________________________________________________________________________
'/opt/csap/csap-platform/working/docker/install-variables.sh':
# generated file
#
export csapUser=csap
export csapWorkingDir=/opt/csap/csap-platform/working/docker
export csapPrimaryPort=4243
export dockerStorage=/var/lib/docker
export dockerStorageDriver=overlay2
export allowRemote=true
export dockerPackage=docker-ce-20.10.6-3.el7
export dockerCliPackage=docker-ce-cli-1:20.10.6-3.el7
export dockerRepo=http://media.lab.sensus.net/media/third_party/linux/CentOS/docker-ce.repo
________________________________________________________________________________________________________________________
Warning - Did not find CSAP_FOLDER env variable, exported as: '/opt/csap/csap-platform'
12/08/2021 11:58:49 [email protected] Running: csap-deploy-as-root.sh
________________________________________________________________________________________________________________________
user: 'csap', script: 'csap-deploy-as-root.sh': 'start', dockerStorage: '/var/lib/docker', allowRemote: 'true'
dockerPackage: 'docker-ce-20.10.6-3.el7', dockerRepo: 'http://media.lab.sensus.net/media/third_party/linux/CentOS/docker-ce.repo'
Learn more about docker-ce: https://docs.docker.com/install/linux/docker-ce/centos/
________________________________________________________________________________________________________________________
________________________________________________________________________________________________________________________
starting docker and enabling via systemctl
________________________________________________________________________________________________________________________
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
________________________________________________________________________________________________________________________
systemctl status docker.service
________________________________________________________________________________________________________________________
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/docker.service.d
└─docker.conf
Active: active (running) since Wed 2021-12-08 11:59:06 EST; 3s ago
Docs: https://docs.docker.com
Main PID: 28868 (dockerd)
CGroup: /system.slice/docker.service
└─28868 /usr/bin/dockerd --containerd=/run/containerd/containerd.sock
Dec 08 11:59:06 csap-dev04.lab.sensus.net dockerd[28868]: time="2021-12-08T11:59:06.017543214-05:00" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Dec 08 11:59:06 csap-dev04.lab.sensus.net dockerd[28868]: time="2021-12-08T11:59:06.017552481-05:00" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 08 11:59:06 csap-dev04.lab.sensus.net dockerd[28868]: time="2021-12-08T11:59:06.037874256-05:00" level=info msg="Loading containers: start."
Dec 08 11:59:06 csap-dev04.lab.sensus.net dockerd[28868]: time="2021-12-08T11:59:06.105090793-05:00" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Dec 08 11:59:06 csap-dev04.lab.sensus.net dockerd[28868]: time="2021-12-08T11:59:06.139478669-05:00" level=info msg="Loading containers: done."
Dec 08 11:59:06 csap-dev04.lab.sensus.net dockerd[28868]: time="2021-12-08T11:59:06.163508380-05:00" level=info msg="Docker daemon" commit=8728dd2 graphdriver(s)=overlay2 version=20.10.6
Dec 08 11:59:06 csap-dev04.lab.sensus.net dockerd[28868]: time="2021-12-08T11:59:06.163650035-05:00" level=info msg="Daemon has completed initialization"
Dec 08 11:59:06 csap-dev04.lab.sensus.net systemd[1]: Started Docker Application Container Engine.
Dec 08 11:59:06 csap-dev04.lab.sensus.net dockerd[28868]: time="2021-12-08T11:59:06.199792819-05:00" level=info msg="API listen on /var/run/docker.sock"
Dec 08 11:59:06 csap-dev04.lab.sensus.net dockerd[28868]: time="2021-12-08T11:59:06.203931873-05:00" level=info msg="API listen on [::]:4243"
Creating configuration shortcuts in /opt/csap/csap-platform/working/docker/docker-os-references
/etc/docker: linked: link-etc-docker
/var/lib/docker: linked: link-var-lib-docker
Prepending docker version to package version
Renaming version folder: /opt/csap/csap-platform/working/docker/version/21.10 to 20.10.11--21.10
‘/opt/csap/csap-platform/working/docker/version/21.10’ -> ‘/opt/csap/csap-platform/working/docker/version/20.10.11--21.10’
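Note: as with kubelet, the engine version is prepended to the package version; the 20.10.11 here matches the installed CLI rather than the 20.10.6 server, suggesting the source is the client version string. A sketch under that assumption:

    dockerVersion=$(docker version --format '{{.Client.Version}}')   # e.g. 20.10.11
    versionDir=/opt/csap/csap-platform/working/docker/version
    mv --verbose "$versionDir/21.10" "$versionDir/${dockerVersion}--21.10"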
________________________________________________________________________________________________________________________
post_start_status_check - id uid=1000(csap) gid=1000(csap) groups=1000(csap),993(docker) context=system_u:system_r:unconfined_service_t:s0
________________________________________________________________________________________________________________________
12/08/2021 11:59:09 [email protected] Running: csap-start.sh
cli: process_csap_cli_args() parsing parameters
JAVA_HOME: '/opt/csap/java/openjdk-11.0.12_7'
java -version: 'OpenJDK 64-Bit Server VM 18.9 (build 11.0.12+7, mixed mode)'
________________________________________________________________________________________________________________________
uncomment the tests to run
________________________________________________________________________________________________________________________
________________________________________________________________________________________________________________________
status_tests
________________________________________________________________________________________________________________________
WARNING: API is accessible on http://0.0.0.0:4243 without encryption.
Access to the remote API is equivalent to root access on the host. Refer
to the 'Docker daemon attack surface' section in the documentation for
more information: https://docs.docker.com/go/attack-surface/
docker info:
________________________________________________________________________________________________________________________
Client:
Context: default
Debug Mode: false
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Build with BuildKit (Docker Inc., v0.6.3-docker)
scan: Docker Scan (Docker Inc., v0.9.0)
Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 20.10.6
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: systemd
Cgroup Version: 1
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 7b11cfaabd73bb80907dd23182b9347b4245eb5d
runc version: v1.0.2-0-g52b36a2
init version: de40ad0
Security Options:
seccomp
Profile: default
Kernel Version: 3.10.0-1127.19.1.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 16
Total Memory: 31.26GiB
Name: csap-dev04.lab.sensus.net
ID: QDIU:FV74:PAWF:TVV7:ZDES:MA5L:GG2G:RTAU:V3WM:NQEF:JMT4:SOEA
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
docker.lab.sensus.net
sensus-docker.lab.sensus.net
127.0.0.0/8
Registry Mirrors:
http://docker.lab.sensus.net/
Live Restore Enabled: false
________________________________________________________________________________________________________________________
docker ps:
________________________________________________________________________________________________________________________
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
________________________________________________________________________________________________________________________
________________________________________________________________________________________________________________________
runtime tests
________________________________________________________________________________________________________________________
docker network create --driver bridge verify_bridge:
________________________________________________________________________________________________________________________
306814d211cf7b08fcd8ef77068780b9ad278b7320099c405aebe5244e743c1a
________________________________________________________________________________________________________________________
Unable to find image 'nginx:1.21.3' locally
1.21.3: Pulling from library/nginx
b380bbd43752: Pulling fs layer
fca7e12d1754: Pulling fs layer
745ab57616cb: Pulling fs layer
a4723e260b6f: Pulling fs layer
1c84ebdff681: Pulling fs layer
858292fd2e56: Pulling fs layer
a4723e260b6f: Waiting
1c84ebdff681: Waiting
858292fd2e56: Waiting
745ab57616cb: Verifying Checksum
745ab57616cb: Download complete
a4723e260b6f: Verifying Checksum
a4723e260b6f: Download complete
1c84ebdff681: Verifying Checksum
1c84ebdff681: Download complete
858292fd2e56: Download complete
fca7e12d1754: Verifying Checksum
fca7e12d1754: Download complete
b380bbd43752: Verifying Checksum
b380bbd43752: Download complete
b380bbd43752: Pull complete
fca7e12d1754: Pull complete
745ab57616cb: Pull complete
a4723e260b6f: Pull complete
1c84ebdff681: Pull complete
858292fd2e56: Pull complete
Digest: sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36
Status: Downloaded newer image for nginx:1.21.3
docker run --detach --name verify-nginx --publish=6080:80 --network=verify_bridge nginx:1.21.3:
________________________________________________________________________________________________________________________
9af1235169e7bbec1f7ae5f13cb317080937c4794f03399c5b3e588aec16e1b5
________________________________________________________________________________________________________________________
docker ps:
________________________________________________________________________________________________________________________
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9af1235169e7 nginx:1.21.3 "/docker-entrypoint.…" 1 second ago Up Less than a second 0.0.0.0:6080->80/tcp, :::6080->80/tcp verify-nginx
________________________________________________________________________________________________________________________
nginx response with most html stripped: 'curl csap-dev04.lab.sensus.net:6080/':
________________________________________________________________________________________________________________________
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.
For online documentation and support please refer to
nginx.org.
Commercial support is available at
nginx.com.
Thank you for using nginx.
________________________________________________________________________________________________________________________
________________________________________________________________________________________________________________________
SUCCESS: Found expected log message: Welcome to nginx!
________________________________________________________________________________________________________________________
2021/12/08 16:59:21 [notice] 1#1: using the "epoll" event method
2021/12/08 16:59:21 [notice] 1#1: nginx/1.21.3
2021/12/08 16:59:21 [notice] 1#1: built by gcc 8.3.0 (Debian 8.3.0-6)
2021/12/08 16:59:21 [notice] 1#1: OS: Linux 3.10.0-1127.19.1.el7.x86_64
2021/12/08 16:59:21 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2021/12/08 16:59:21 [notice] 1#1: start worker processes
2021/12/08 16:59:21 [notice] 1#1: start worker process 32
2021/12/08 16:59:21 [notice] 1#1: start worker process 33
2021/12/08 16:59:21 [notice] 1#1: start worker process 34
2021/12/08 16:59:21 [notice] 1#1: start worker process 35
2021/12/08 16:59:21 [notice] 1#1: start worker process 36
2021/12/08 16:59:21 [notice] 1#1: start worker process 37
2021/12/08 16:59:21 [notice] 1#1: start worker process 38
2021/12/08 16:59:21 [notice] 1#1: start worker process 39
2021/12/08 16:59:21 [notice] 1#1: start worker process 40
2021/12/08 16:59:21 [notice] 1#1: start worker process 41
2021/12/08 16:59:21 [notice] 1#1: start worker process 42
2021/12/08 16:59:21 [notice] 1#1: start worker process 43
2021/12/08 16:59:21 [notice] 1#1: start worker process 44
2021/12/08 16:59:21 [notice] 1#1: start worker process 45
2021/12/08 16:59:21 [notice] 1#1: start worker process 46
2021/12/08 16:59:21 [notice] 1#1: start worker process 47
docker logs verify-nginx:
________________________________________________________________________________________________________________________
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
172.18.0.1 - - [08/Dec/2021:16:59:21 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.29.0" "-"
________________________________________________________________________________________________________________________
docker rm --force verify-nginx:
________________________________________________________________________________________________________________________
verify-nginx
________________________________________________________________________________________________________________________
docker network rm verify_bridge:
________________________________________________________________________________________________________________________
verify_bridge
________________________________________________________________________________________________________________________
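Note: the runtime test is a self-contained create/verify/teardown cycle; collected from the commands above, it can be rerun standalone at any time:

    docker network create --driver bridge verify_bridge
    docker run --detach --name verify-nginx --publish=6080:80 --network=verify_bridge nginx:1.21.3
    curl --silent http://$(hostname --fqdn):6080/ | grep 'Welcome to nginx!'
    docker logs verify-nginx
    docker rm --force verify-nginx
    docker network rm verify_bridge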
________________________________________________________________________________________________________________________
'csap' is a member of docker group:
________________________________________________________________________________________________________________________
...
________________________________________________________________________________________________________________________
Flag to exit read loop in AdminController.java XXXYYYZZZ_AdminController
__COMPLETED__
*** 11:59:22, December 8 2021