Running a Ceph cluster with Rook on openSUSE Kubic

Kubic is a certified Kubernetes distribution and a set of container-related technologies built by the openSUSE community.

Rook is a storage orchestrator for Kubernetes. For general details, please refer to the official documentation.

Setting up a Kubic cluster

Please refer to Kubic's installation documentation

To set up virtual machines with Kubic locally, please use the kubic-terraform-kvm project and refer to the README.

Kubic worker nodes should have some extra disks attached to use for Ceph storage. In this example, the nodes have /dev/vdb, /dev/vdc, /dev/vdd, and /dev/vde.
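
To confirm that the extra disks are attached, you can list the block devices directly on a worker node; lsblk should be available on Kubic as part of util-linux:

# Run on a worker node; vdb-vde should appear as empty disks without partitions
lsblk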

At this point, the kubectl command should be working and be connected to your Kubic cluster.

To verify your Kubic cluster, display all nodes known to Kubernetes:

kubectl get nodes
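
The output should look roughly like the following; node names, ages, and the Kubernetes version are placeholders for a hypothetical three-node kubic-terraform-kvm setup:

NAME      STATUS   ROLES    AGE   VERSION
kubic-0   Ready    master   12m   v1.16.2
kubic-1   Ready    <none>   10m   v1.16.2
kubic-2   Ready    <none>   10m   v1.16.2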

Installation

Installing a Ceph cluster with Rook is done in three steps. First, we need to set up the Rook operator. Then, we need to initialize the cluster, and finally, we need to configure the cluster with OSDs and gateways.

Download the Kubernetes Manifests

In order to properly set up a Ceph cluster with Rook, we will need to download Kubernetes Manifest files:

wget https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/common.yaml
wget https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/operator.yaml
wget https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/cluster-minimal.yaml
wget https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/toolbox.yaml

Where

  • common.yaml contains some common manifests, like the namespace and custom resource definitions.
  • operator.yaml contains the necessary manifests to run the Rook operator.
  • cluster-minimal.yaml contains a custom resource defining a simple Ceph cluster.
  • toolbox.yaml contains a Pod for executing the ceph command-line tool.
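
To get a quick overview of what these manifests define before applying anything, you can list the resource kinds they contain; this is a plain shell check and needs no cluster access:

# Count the Kubernetes resource kinds defined across the downloaded manifests
grep -h '^kind:' common.yaml operator.yaml cluster-minimal.yaml toolbox.yaml | sort | uniq -c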

Now, we need to adjust the Kubernetes manifest files to match your Kubic cluster.

FlexVolume Support

Rook still needs FlexVolume support to work.

It is therefore necessary to change the configuration in the Rook manifest operator.yaml. Ensure that the following configuration exists there, setting the path where the Rook agent can find the FlexVolume plugins:

- name: FLEXVOLUME_DIR_PATH
  value: /var/lib/kubelet/volumeplugins
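
For orientation, the setting is an environment variable of the rook-ceph-operator container. The following sketch abbreviates the surrounding Deployment fields; the exact layout depends on your Rook version, so verify it against your copy of operator.yaml:

# Sketch: placement of FLEXVOLUME_DIR_PATH inside the rook-ceph-operator Deployment
# (most fields omitted for brevity)
spec:
  template:
    spec:
      containers:
      - name: rook-ceph-operator
        env:
        - name: FLEXVOLUME_DIR_PATH
          value: /var/lib/kubelet/volumeplugins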

Deploy Rook

Please apply the following two YAML files and wait until the Rook operator is running successfully:

kubectl apply -f common.yaml -f operator.yaml

Rook-Ceph has two main components: the operator, which runs in Kubernetes and allows the creation of Ceph clusters, and the Ceph cluster itself, which is created and partially managed by the operator.

It is also possible to follow the creation of the operator:

watch kubectl get pod -n rook-ceph
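
Once the operator has started, the output of the watch should look roughly like this (the name suffixes are placeholders, and depending on the Rook version a rook-discover pod runs on every node):

NAME                                 READY   STATUS    RESTARTS   AGE
rook-ceph-operator-xxxxxxxxxx-xxxxx  1/1     Running   0          2m
rook-discover-xxxxx                  1/1     Running   0          1m
rook-discover-yyyyy                  1/1     Running   0          1m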

Configure the Ceph cluster

In order to use Ceph container images based on openSUSE Leap, please change cluster-minimal.yaml:

spec:
  cephVersion:
    allowUnsupported: false
    image: registry.opensuse.org/home/ssebastianwagner/rook-ceph/images/opensuse/leap:latest

Please note that the image location will be changed soon.

Create the Ceph cluster

Now, apply the cluster manifest:

kubectl apply -f cluster-minimal.yaml
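
You can also check the CephCluster custom resource that cluster-minimal.yaml created; in the upstream examples it is typically named rook-ceph:

# List the CephCluster resource created by cluster-minimal.yaml
kubectl -n rook-ceph get cephcluster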

Watch the rook-ceph namespace to see the Ceph cluster start up. You should eventually see three mons and one mgr when this is finished.

watch -c kubectl -n rook-ceph get pod
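
A minimal cluster that has finished starting up should show pods similar to the following (the mon letters and the random suffixes are placeholders):

NAME                                READY   STATUS    RESTARTS   AGE
rook-ceph-mgr-a-xxxxxxxxxx-xxxxx    1/1     Running   0          1m
rook-ceph-mon-a-xxxxxxxxxx-xxxxx    1/1     Running   0          3m
rook-ceph-mon-b-xxxxxxxxxx-xxxxx    1/1     Running   0          2m
rook-ceph-mon-c-xxxxxxxxxx-xxxxx    1/1     Running   0          2m
rook-ceph-operator-xxxxxxxxx-xxxxx  1/1     Running   0          10m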

Now, apply the toolbox:

kubectl apply -f toolbox.yaml
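
The toolbox is a single pod; check that it is running before you exec into it:

# The toolbox pod carries the label app=rook-ceph-tools, which is also used below
kubectl -n rook-ceph get pod -l "app=rook-ceph-tools"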

Deploy OSDs

Please use the Ceph orchestrator to deploy OSDs on devices:

Switch to the toolbox:

kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash

Display all detected devices:

ceph orchestrator device ls

And then create OSDs for these devices, for example:

ceph orchestrator osd create kubic-1:vdb
ceph orchestrator osd create kubic-2:vdb
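
Still inside the toolbox, you can verify that the new OSDs came up and joined the cluster:

# Overall cluster health; the OSDs should be reported as up and in
ceph status
# Show where the OSDs sit in the CRUSH tree, grouped by host
ceph osd tree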

Now, you should have a running Ceph cluster within your Kubernetes cluster.