OpenShift multitenancy

https://docs.openshift.com/container-platform/4.7/networking/network_policy/multitenant-network-policy.html

A practical example of OpenShift network multitenancy.

Test scenario

Two projects/namespaces are used: test and test1. The test application is nginx. In outline, the setup is:

oc new-project test
oc new-project test1
oc new-app --name nginx --docker-image docker.io/library/nginx:latest
oc set serviceaccount deployment nginx <anyuid enabled Service Account>

The command to access the nginx service.

curl -s <ip address> | grep Thank

<p><em>Thank you for using nginx.</em></p>

Two users

  • admin with cluster-admin authority
  • a limited developer user with edit authority

It is expected that applications in a project can access services only within the same project and are blocked from accessing services in other projects.

Create projects

As the developer user.

oc new-project test
oc new-project test1
oc create serviceaccount supersa -n test
oc create serviceaccount supersa -n test1

As the admin user.

oc adm policy add-scc-to-user anyuid -z supersa -n test
oc adm policy add-scc-to-user anyuid -z supersa -n test1

As the developer user.

oc new-app --name app --docker-image docker.io/library/nginx:latest -n test
oc set serviceaccount deployment app supersa -n test
oc new-app --name app --docker-image docker.io/library/nginx:latest -n test1
oc set serviceaccount deployment app supersa -n test1
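
Optionally, verify that both deployments picked up the supersa service account:

oc get deployment app -n test -o jsonpath='{.spec.template.spec.serviceAccountName}{"\n"}'
oc get deployment app -n test1 -o jsonpath='{.spec.template.spec.serviceAccountName}{"\n"}'

Both commands should print supersa.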

Expose services

oc expose service app -n test
oc expose service app -n test1

Test

At this point, the services are accessible from outside, the service in test can access the service in test1, and vice versa.

Run

oc get pods -n test
oc get svc -n test
oc get route -n test

oc get pods -n test1
oc get svc -n test1
oc get route -n test1

Collect all access data.

Project   nginx pod              Service Cluster IP   Route
test      app-68bb6db796-c4t6k   172.30.128.219       app-test.apps.boreal.cp.fyre.ibm.com
test1     app-68bb6db796-9t9df   172.30.184.11        app-test1.apps.boreal.cp.fyre.ibm.com
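
The same information can be collected with a short loop (the pod names and addresses will differ in your cluster):

for ns in test test1; do
  echo "== $ns =="
  oc get pods -n "$ns" -o name
  oc get svc app -n "$ns" -o jsonpath='{.spec.clusterIP}{"\n"}'
  oc get route app -n "$ns" -o jsonpath='{.spec.host}{"\n"}'
done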

Verify that nginx is accessible externally.

curl -s app-test.apps.boreal.cp.fyre.ibm.com | grep Thank

<p><em>Thank you for using nginx.</em></p>

Verify that the nginx services can access each other internally.
Open a shell in the pod in project test.

oc -n test rsh app-68bb6db796-c4t6k

Call the service in project test1.

curl -s 172.30.184.11 | grep Thank

<p><em>Thank you for using nginx.</em></p>
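
Instead of the cluster IP, the standard Kubernetes service DNS name should work as well from inside the pod (assuming the default cluster DNS setup):

curl -s app.test1.svc.cluster.local | grep Thank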

Multitenancy

Now we want to harden the policy so that internal traffic is allowed only between pods within a single project.

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-all
spec:
  podSelector: {}

oc create -n test -f https://raw.githubusercontent.com/stanislawbartkowski/CP4D/main/miltitenancy/deny-all.yaml
oc create -n test1 -f https://raw.githubusercontent.com/stanislawbartkowski/CP4D/main/miltitenancy/deny-all.yaml
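
Note that deny-all selects every pod and allows no ingress at all, so by itself it would also deny traffic between different pods of the same project, not only cross-project traffic. The OpenShift network policy documentation linked at the top pairs such a deny-by-default policy with an allow-same-namespace policy to keep same-project traffic open; a sketch:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}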

Test again.

oc -n test rsh app-68bb6db796-c4t6k
curl -s 172.30.184.11

Cancel with <CTRL>C.

Check traffic inside project test.

curl -s app-68bb6db796-c4t6k | grep Thank

<p><em>Thank you for using nginx.</em></p>

Try external access.

curl -s app-test.apps.boreal.cp.fyre.ibm.com

The call hangs; break with <CTRL>C. External access through the route is blocked because the deny-all policy is very restrictive: it does not differentiate between traffic from OpenShift platform services, such as the router, and traffic from other developer projects.
We need to soften the restriction and allow traffic from the OpenShift router and monitoring services.

oc get project openshift-ingress --show-labels

NAME                DISPLAY NAME   STATUS   LABELS
openshift-ingress                  Active   name=openshift-ingress,network.openshift.io/policy-group=ingress,olm.operatorgroup.uid/4e616a48-8931-4b71-8dd4-5c5232a40fa7=,openshift.io/cluster-monitoring=true

oc get project openshift-monitoring --show-labels

NAME                   DISPLAY NAME   STATUS   LABELS
openshift-monitoring                  Active   name=openshift-monitoring,network.openshift.io/policy-group=monitoring,olm.operatorgroup.uid/4e616a48-8931-4b71-8dd4-5c5232a40fa7=,openshift.io/cluster-monitoring=true

The openshift-ingress project, which handles route traffic, is labelled network.openshift.io/policy-group=ingress, and the openshift-monitoring project is labelled network.openshift.io/policy-group=monitoring.

The network policy that re-opens router traffic allows ingress from any project labelled network.openshift.io/policy-group: ingress.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-ingress
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          network.openshift.io/policy-group: ingress
  podSelector: {}
  policyTypes:
  - Ingress

oc create -f https://raw.githubusercontent.com/stanislawbartkowski/CP4D/main/miltitenancy/allow-from-openshift-ingress.yaml -n test
oc create -f https://raw.githubusercontent.com/stanislawbartkowski/CP4D/main/miltitenancy/allow-from-openshift-ingress.yaml -n test1

oc create -f https://github.com/stanislawbartkowski/CP4D/raw/main/miltitenancy/allow-from-openshift-monitoring.yaml -n test
oc create -f https://github.com/stanislawbartkowski/CP4D/raw/main/miltitenancy/allow-from-openshift-monitoring.yaml -n test1
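
The contents of allow-from-openshift-monitoring.yaml are not reproduced here; it presumably mirrors the ingress policy above, matching the monitoring policy-group label instead, roughly:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-monitoring
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          network.openshift.io/policy-group: monitoring
  podSelector: {}
  policyTypes:
  - Ingress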

Test again.

curl -s app-test.apps.boreal.cp.fyre.ibm.com | grep Thank
curl -s app-test1.apps.boreal.cp.fyre.ibm.com | grep Thank

<p><em>Thank you for using nginx.</em></p>
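
At this point each project should contain three network policies: deny-all, allow-from-openshift-ingress and allow-from-openshift-monitoring. They can be listed with:

oc get networkpolicy -n test
oc get networkpolicy -n test1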

Make an exception

Assume we want to give a specific ubi8 pod in test access to a custom app-share service in test1. Only this inter-project traffic is allowed; all other restrictions stay in place.

oc new-app --name app-share --docker-image docker.io/library/nginx:latest -n test1
oc set serviceaccount deployment app-share supersa -n test1
oc -n test run ubi8 --image=registry.redhat.io/ubi8/ubi --command -- /bin/bash -c 'while true; do sleep 3; done'

oc get service app-share -n test1
NAME        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
app-share   ClusterIP   172.30.155.211   <none>        80/TCP    51s

As the admin user.

oc label namespace test name=test

As the developer user.

oc get project test --show-labels

NAME   DISPLAY NAME   STATUS   LABELS
test                  Active   name=test
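
The exception policy below matches the client pod by its run=ubi8 label, which oc run sets automatically. This can be confirmed with:

oc get pod ubi8 -n test --show-labels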

Prepare the exception policy. The policy opens traffic to the app-share pod from pods labelled run=ubi8 deployed in a project labelled name=test, and only on port 80.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: network-exception
spec:
  podSelector:
    matchLabels:
      deployment: app-share
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: test
      podSelector:
        matchLabels:
          run: ubi8
    ports:
    - port: 80
      protocol: TCP

oc create -n test1 -f https://github.com/stanislawbartkowski/CP4D/raw/main/miltitenancy/network-exception.yaml
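
Optionally, inspect the created policy to confirm the selectors and ports:

oc describe networkpolicy network-exception -n test1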

Test access from the ubi8 pod in project test.

oc -n test rsh ubi8
curl -s 172.30.155.211 | grep Thank

<p><em>Thank you for using nginx.</em></p>

Check access to the other service in test1.

curl -s 172.30.184.11

Access is blocked.
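
When finished, the test projects can be removed, which deletes all the resources created above:

oc delete project test test1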
