OpenShift - henk52/knowledgesharing GitHub Wiki
# OpenShift

## Introduction

## References

## Vocabulary
- HA: high availability.
- Istio: a service mesh that adds traffic management, security, and observability to a set of services.
- Jaeger: a distributed tracing backend that implements the OpenTracing APIs.
- Kiali: an observability console for Istio service meshes; integrates with Jaeger and OpenTracing.
- OpenTracing: vendor-neutral APIs and instrumentation for distributed tracing.
- SCC: security context constraints.
## Getting the oc command

- Get the oc binary from the openshift-origin-client-tools release archive:
  - sudo cp openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit/oc /usr/local/bin
  - oc version
### Get oc on a commercial OpenShift server

- Log in to the OpenShift web console
- Click the '?' next to your name in the top right corner
  - You need a Red Hat account for this
  - Your OS has to be in the Red Hat family
- tar -xf oc-*.tar.gz
- sudo cp oc /usr/local/bin
- oc version
## Command overview

- oc login
  - authenticate to the cluster
- oc project MY_PROJECT
  - switch to the given project
- oc projects
  - list projects
- oc status
  - show an overview of the current project
- oc import-image YOUR_IMAGE
  - re-import the image metadata from its source registry
- oc api-resources
  - list the resource types the server supports
- oc rollout history dc/NAME
  - show the rollout revisions of a deployment configuration
  - oc rollout history dc/filebench --revision=1
    - show the details of a single revision
- oc rollout undo dc/NAME
  - roll back to the previous revision
## Administration
- oc create sa myserviceaccount
- oc adm policy add-role-to-group view system:serviceaccounts -n myproject
- oc get serviceaccounts
- oc get clusterrole
- oc describe clusterrole/system:registry
- oc adm policy add-role-to-user edit bob
  - add -n PROJECT_NAME to grant the role in a specific project
- oc get rolebinding
- oc get secret
- oc export secret/registry-config
- master-restart api
- master-logs api api
- oc describe sa node-bootstrapper -n openshift-infra
- oc describe clusterrolebindings system:node-bootstrapper
- oc policy can-i --list --loglevel=8
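A command like oc adm policy add-role-to-user effectively creates a RoleBinding object. A minimal sketch of the equivalent manifest for granting 'edit' to user bob (the binding name and project name are placeholders):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: edit-bob          # arbitrary binding name
  namespace: myproject    # placeholder project name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: bob
```

The result can then be inspected with oc get rolebinding, as listed above.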
### Commands for troubleshooting
- oc get events | grep ServiceAccount
- oc describe sa/proxy | grep -A5 Events
## Authorization

- oc sa get-token <serviceaccount_name>
  - print the API token of the given service account
- oc whoami -t
  - print the token of the current session
- You can also use the 'Code Grant' method to request a token.
- oc describe clusterrole.rbac admin basic-user
  - show the rules of the 'admin' and 'basic-user' cluster roles
## minikube commands
- minikube start --vm-driver kvm2
- minikube status
- minikube dashboard
- minikube ssh
## Docker image of OKD
- sudo docker run -d --name "origin" --privileged --pid=host -v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys -v /sys/fs/cgroup:/sys/fs/cgroup:rw -v /var/lib/docker:/var/lib/docker:rw -v /var/lib/origin/openshift.local.volumes:/var/lib/origin/openshift.local.volumes:rslave openshift/origin start
### Troubleshooting DinD OKD

Error:

    kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"

Fix it by adapting the ExecStart line in /usr/lib/systemd/system/docker.service:

    ExecStart=/usr/bin/dockerd --exec-opt native.cgroupdriver=systemd

Error:

    F0814 08:30:38.019520 10322 kubelet.go:1376] Failed to start ContainerManager Delegation not available for unit type

Fix:

- cp /usr/lib/systemd/system/docker.service /etc/systemd/system/
- sed -i 's/cgroupdriver=systemd/cgroupdriver=cgroupfs/' /etc/systemd/system/docker.service
- sudo systemctl daemon-reload
- sudo systemctl restart docker
## Kubernetes
- kubectl cluster-info
- kubectl config use-context minikube
- kubectl create
  - kubectl create -f httpd-pod.yaml
    - This creates a bare pod with no Deployment/ReplicaSet behind it, so it is not restarted or rescheduled if it dies.
- kubectl delete
  - pod
    - kubectl delete pod httpd-8576c89d7-qjd62
    - kubectl delete pod --all
      - Pods managed by a Deployment/ReplicaSet are recreated automatically.
  - kubectl delete all --all
    - This deletes all resources in the 'all' category (pods, services, deployments, replica sets) in the current namespace.
- kubectl describe
  - pod POD_NAME
- kubectl expose pod httpd-66c6df655-8h5f4 --port=80 --name=httpd-exposed --type=NodePort
- kubectl edit
  - deploy
    - kubectl edit deploy httpd1
      - Opens the live object in an editor; e.g. change 'replicas:' to scale the 'httpd1' deployment.
- kubectl get
- all
- deploy
- events
- kubectl get events --sort-by=.metadata.creationTimestamp | tail -n 8
- nodes
- pods
- kubectl get pods --selector='app=httpd-demo2'
- replicaset / rs
- services
- kubectl run httpd --image=httpd
- kubectl run httpd1 --image=httpd --labels="app=httpd-demo2"
cat httpd-pod.yaml:

    apiVersion: v1
    kind: Pod
    metadata:
      name: httpd
      namespace: default
    spec:
      containers:
      - name: httpd-container
        image: httpd
        ports:
        - containerPort: 80
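A bare Pod like the one above is not recreated if it dies. Wrapping the same container in a Deployment adds restart and scaling; a minimal sketch, reusing the app=httpd-demo2 label from the kubectl run example (names are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd
spec:
  replicas: 2
  selector:
    matchLabels:
      app: httpd-demo2
  template:
    metadata:
      labels:
        app: httpd-demo2
    spec:
      containers:
      - name: httpd-container
        image: httpd
        ports:
        - containerPort: 80
```

Create it with kubectl create -f, then kubectl delete pod --all will demonstrate the automatic recreation.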
### Kubernetes namespaces

- default: the namespace for resources that do not specify another namespace.
- kube-public: used for resources that must be readable even by unauthenticated users.
- kube-system: used internally by Kubernetes for system resources.
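Additional namespaces can also be created declaratively; a minimal sketch (the namespace name is a placeholder):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
```

Apply it with kubectl create -f, then target it with -n my-namespace on later commands.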
### kustomize
## Recipes
### Assign a pod to a node via a label
- oc get -o wide nodes
- Assign a label to a node:
  - oc label node NODE_NAME my_label=alpha
- On the web console:
  - Select the project
  - Select Application Console
  - Click Overview
  - Click the pod name (not the deployment)
  - Click Actions -> Edit YAML
  - In the line after 'restartPolicy:' add:

        nodeSelector:
          my_label: alpha

  - Save the YAML; the pod will restart
- You can now increase the replica count, and all pods should be scheduled on the same node.
## Planning

In addition to configuring HA for OpenShift Container Platform, you must separately configure HA for the API server load balancer. To do so, it is much preferred to integrate an enterprise load balancer (LB) such as an F5 Big-IP™ or a Citrix Netscaler™ appliance. If such a solution is not available, it is possible to run multiple HAProxy load balancers and use Keepalived to provide a floating virtual IP address for HA. However, this solution is not recommended for production instances. See:

Since you need at least two master services for HA, it is common to maintain a uniform odd number of hosts when collocating master services and etcd, because clustered etcd requires an odd number of hosts for quorum.
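The odd-number rule follows from etcd's majority quorum, quorum = floor(n/2) + 1: an even-sized cluster tolerates no more failures than the next smaller odd size. A quick check in shell:

```shell
# etcd quorum for an n-member cluster: majority = floor(n/2) + 1.
# Note that 4 members tolerate no more failures than 3 do.
for n in 1 2 3 4 5; do
  quorum=$(( n / 2 + 1 ))
  tolerated=$(( n - quorum ))
  echo "members=$n quorum=$quorum tolerated_failures=$tolerated"
done
```

Running it shows that 3 and 4 members both tolerate only one failure, so the fourth host buys nothing.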
## Operating OpenShift
### Deployment
If the deployment configuration changes, a new replication controller is created with the latest pod template, and a deployment process runs to scale down the old replication controller and scale up the new replication controller.
If 'revisionHistoryLimit:' is not set, old replication controllers are not cleaned up.

- oc rollout history dc/NAME
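To have old replication controllers pruned automatically, set 'revisionHistoryLimit' in the deployment configuration spec. A fragment showing where the field lives, reusing the filebench name from above (the limit value is an arbitrary example):

```yaml
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: filebench
spec:
  revisionHistoryLimit: 5   # keep only the 5 most recent old replication controllers
```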
## Install OpenShift
- sudo yum -y update
- sudo yum -y install git docker epel-release
- sudo yum -y install ansible
- sudo systemctl start docker
- sudo systemctl enable docker
- sudo systemctl status docker
- getenforce
  - ensure SELinux is Enforcing
- git clone https://github.com/openshift/openshift-ansible
- cd openshift-ansible
- git branch -r
- git checkout release-3.11
- sudo -i
- ssh-keygen
- cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
  - append with '>>' so any existing authorized keys are kept
### minikube

- install minikube
- minikube dashboard
## Troubleshooting
Error:

    nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:2

Fix it in the Dockerfile:

    # non-root users are not allowed to listen on privileged ports (below 1024)
    RUN sed -i.bak 's/listen\(.*\)80;/listen 8081;/' /etc/nginx/conf.d/default.conf
    EXPOSE 8081

    # comment out the user directive; the master process runs as a non-root user in OpenShift anyway
    RUN sed -i.bak 's/^user/#user/' /etc/nginx/nginx.conf
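The effect of those sed expressions can be checked locally without building the image; a quick sketch using sample config lines (the local file names stand in for the real nginx configs):

```shell
# Recreate the two config lines the Dockerfile edits, then apply the same sed expressions.
printf 'listen    80;\n' > default.conf
printf 'user  nginx;\n' > nginx.conf

sed -i.bak 's/listen\(.*\)80;/listen 8081;/' default.conf
sed -i.bak 's/^user/#user/' nginx.conf

cat default.conf   # -> listen 8081;
cat nginx.conf     # -> #user  nginx;
```

The .bak suffix keeps the original line around, which makes it easy to diff the change.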