# WARP in k8s
This page shows how to run WARP in a Kubernetes (k8s) environment.
- Make sure MinIO has an optimal configuration.
- One requirement for this optimal configuration is a unique set of disks per MinIO server, for example:
```
$ k directpv list volumes --pod-namespaces mavenir-february
┌──────────────────────────────────────────┬──────────┬─────────────┬──────────┬───────────────────────────────────┬──────────────────┬─────────┐
│ VOLUME │ CAPACITY │ NODE │ DRIVE │ PODNAME │ PODNAMESPACE │ STATUS │
├──────────────────────────────────────────┼──────────┼─────────────┼──────────┼───────────────────────────────────┼──────────────────┼─────────┤
│ pvc-91bbeb48-a68c-4b99-8977-6c3b7fc2d4fa │ 8.0 TiB │ minio-k8s17 │ nvme4n1 │ mavenir-testing-february-pool-0-1 │ mavenir-february │ Bounded │
│ pvc-3e255b0a-9593-405f-af0a-a9d610ac8fb8 │ 8.0 TiB │ minio-k8s17 │ nvme5n1 │ mavenir-testing-february-pool-0-1 │ mavenir-february │ Bounded │
│ pvc-fa371ef8-a792-459e-9154-899a3b782066 │ 8.0 TiB │ minio-k8s17 │ nvme7n1 │ mavenir-testing-february-pool-0-1 │ mavenir-february │ Bounded │
│ pvc-b24fc171-9eb8-44cc-bb38-6e8b580dbdf9 │ 8.0 TiB │ minio-k8s17 │ nvme9n1 │ mavenir-testing-february-pool-0-1 │ mavenir-february │ Bounded │
│ pvc-2032b979-42e3-41be-a69f-c1a2e76ad0eb │ 8.0 TiB │ minio-k8s18 │ nvme10n1 │ mavenir-testing-february-pool-0-0 │ mavenir-february │ Bounded │
│ pvc-bb494cad-684a-4a27-9ada-ba75aa0c3d06 │ 8.0 TiB │ minio-k8s18 │ nvme1n1 │ mavenir-testing-february-pool-0-0 │ mavenir-february │ Bounded │
│ pvc-863e00da-c22a-4b0b-86c3-91d17ed8c053 │ 8.0 TiB │ minio-k8s18 │ nvme3n1 │ mavenir-testing-february-pool-0-0 │ mavenir-february │ Bounded │
│ pvc-cdcedcec-decb-4019-9e9a-34b5c6c8ba7d │ 8.0 TiB │ minio-k8s18 │ nvme4n1 │ mavenir-testing-february-pool-0-0 │ mavenir-february │ Bounded │
│ pvc-6fefdfe2-599c-4c24-9f12-c71366688ec2 │ 8.0 TiB │ minio-k8s19 │ nvme10n1 │ mavenir-testing-february-pool-0-2 │ mavenir-february │ Bounded │
│ pvc-10c59f8d-6166-48fc-b288-b3566a89b162 │ 8.0 TiB │ minio-k8s19 │ nvme1n1 │ mavenir-testing-february-pool-0-2 │ mavenir-february │ Bounded │
│ pvc-1deebb51-c962-43b8-a318-69b0e8e20554 │ 8.0 TiB │ minio-k8s19 │ nvme3n1 │ mavenir-testing-february-pool-0-2 │ mavenir-february │ Bounded │
│ pvc-fdc4b9c0-95b6-4b93-94a1-415a6562cccf │ 8.0 TiB │ minio-k8s19 │ nvme4n1 │ mavenir-testing-february-pool-0-2 │ mavenir-february │ Bounded │
│ pvc-fe3f4b51-1ecb-40eb-b780-513a81b96029 │ 8.0 TiB │ minio-k8s20 │ nvme12n1 │ mavenir-testing-february-pool-0-3 │ mavenir-february │ Bounded │
│ pvc-712746d4-90d2-4912-a697-e6fa9315a987 │ 8.0 TiB │ minio-k8s20 │ nvme2n1 │ mavenir-testing-february-pool-0-3 │ mavenir-february │ Bounded │
│ pvc-ab873452-6e13-40dc-b4da-312311c86be5 │ 8.0 TiB │ minio-k8s20 │ nvme3n1 │ mavenir-testing-february-pool-0-3 │ mavenir-february │ Bounded │
│ pvc-028f1363-9d4e-4882-97b3-02d44ee4fc45 │ 8.0 TiB │ minio-k8s20 │ nvme6n1 │ mavenir-testing-february-pool-0-3 │ mavenir-february │ Bounded │
└──────────────────────────────────────────┴──────────┴─────────────┴──────────┴───────────────────────────────────┴──────────────────┴─────────┘
```
- Notice that each node has a different set of drives; for example, look at one node:

```
┬─────────────┬──────────┬
│ NODE        │ DRIVE    │
┼─────────────┼──────────┼
│ minio-k8s17 │ nvme4n1  │ <---- A different drive per volume, meaning unique disks.
│ minio-k8s17 │ nvme5n1  │
│ minio-k8s17 │ nvme7n1  │
│ minio-k8s17 │ nvme9n1  │
```
- To achieve that, you can use a node selector or change the size of the volumes.
- Also, FYI: the reason we need a unique set of disks is that otherwise IOPS will drop by 50%.
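- If you want to double-check drive uniqueness from the same `directpv` output, a quick shell pass over the table works. This is a minimal sketch that assumes the box-drawing table layout shown above (columns separated by `│`, so it needs a UTF-8 aware awk such as gawk) and the example namespace:

```
# Count (NODE, DRIVE) pairs from the directpv volume listing.
# Any pair with a count greater than 1 means two volumes share the same drive.
k directpv list volumes --pod-namespaces mavenir-february \
  | awk -F'│' '/pvc-/ {gsub(/ /, "", $4); gsub(/ /, "", $5); print $4, $5}' \
  | sort | uniq -c | sort -rn
```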
- Deploy the WARP Service:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: warp
  namespace: <namespace>
  labels:
    app: warp
spec:
  publishNotReadyAddresses: true
  clusterIP: None
  ports:
    - port: 7761
      name: warp
  selector:
    app: warp
```
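- Optionally, apply and sanity-check the Service before moving on. A small sketch, assuming the manifest above is saved as `warp-service.yaml` (the file name is an assumption) and `<namespace>` is replaced with your namespace:

```
kubectl apply -f warp-service.yaml
# A headless Service shows CLUSTER-IP as "None" and exposes port 7761:
kubectl get svc warp -n <namespace>
# Endpoints appear once the warp client pods from the StatefulSet below exist,
# including not-ready pods, thanks to publishNotReadyAddresses: true.
kubectl get endpoints warp -n <namespace>
```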
- Deploy the WARP Clients:
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: warp
  namespace: <namespace>
  labels:
    app: warp
spec:
  serviceName: warp
  podManagementPolicy: Parallel
  replicas: 4
  selector:
    matchLabels:
      app: warp
  template:
    metadata:
      name: warp
      labels:
        app: warp
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - warp
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: warp
          image: "minio/warp:latest"
          imagePullPolicy: Always
          args:
            - client
          ports:
            - name: http
              containerPort: 7761
```
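- After applying the StatefulSet, it is worth confirming that all four clients are Running and that the anti-affinity rule actually spread them across nodes. A minimal sketch, assuming the manifest is saved as `warp-clients.yaml` (the file name is an assumption):

```
kubectl apply -f warp-clients.yaml
# -o wide adds the NODE column; with the podAntiAffinity above,
# warp-0 through warp-3 should each land on a different node.
kubectl get pods -n <namespace> -l app=warp -o wide
```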
- Deploy the WARP Job:
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: warp-job
  namespace: <namespace>
spec:
  template:
    spec:
      containers:
        - name: warp-job
          env:
            - name: WARP_ACCESS_KEY
              value: "<MINIO-USER>"
            - name: WARP_SECRET_KEY
              value: "<MINIO-PASSWORD>"
          image: "minio/warp:latest"
          imagePullPolicy: Always
          args: [ "get", "--tls", "--bucket", "benchmark-bucket", "--warp-client", "warp-{0...3}.warp.<namespace>.svc.cluster.local:7761", "--host", "<look-at-example-below>", "--concurrent", "64", "--obj.size", "32MiB" ]
      restartPolicy: Never
  backoffLimit: 4
```
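- Once the `<MINIO-USER>`, `<MINIO-PASSWORD>`, and `--host` placeholders are filled in (the `--host` value is explained below), launching the benchmark and waiting for it to finish could look like the sketch below; the file name and timeout are assumptions:

```
kubectl apply -f warp-job.yaml
# Block until the Job completes; size the timeout to the expected benchmark duration.
kubectl wait --for=condition=complete job/warp-job -n <namespace> --timeout=60m
kubectl get jobs -n <namespace>
```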
- `--tls` is used because we are using TLS in this MinIO example.
- For the `--host` flag, use the headless service to talk to MinIO, for example:
```
### --host flag should have 4 entries:
mavenir-testing-february-pool-0-{0...3}.mavenir-testing-february-hl.mavenir-february.svc.cluster.local:9000
```

Where:

- `mavenir-testing-february-pool-0-{0...3}`: name of the MinIO pods
- `mavenir-testing-february-hl`: name of the headless Service
- `mavenir-february`: namespace
- `9000`: port of the headless Service
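- If you are not sure what the headless Service or pod names are for your tenant, they can be discovered with plain `kubectl`. A sketch using the example namespace (in this example the tenant's headless Service ends in `-hl` and listens on port 9000):

```
# Find the headless Service name and port for the MinIO tenant:
kubectl get svc -n mavenir-february
# List the MinIO pods whose names feed the {0...N} expansion in --host:
kubectl get pods -n mavenir-february -o wide
```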
- Once the Job is completed, get the log (the `kubectl logs` sketch after the output shows one way to pull it):

```
warp: Connecting to ws://warp-0.warp.<namespace>.svc.cluster.local:7761/ws
warp: Client 192.168.93.102:7761 connected...
warp: Connecting to ws://warp-1.warp.<namespace>.svc.cluster.local:7761/ws
warp: Client 192.168.239.254:7761 connected...
warp: Connecting to ws://warp-2.warp.<namespace>.svc.cluster.local:7761/ws
warp: Client 192.168.13.217:7761 connected...
warp: Connecting to ws://warp-3.warp.<namespace>.svc.cluster.local:7761/ws
warp: Client 192.168.177.0:7761 connected...
warp: All clients connected...
warp: Requesting stage prepare start...
warp: Client 192.168.13.217:7761: Requested stage prepare start...
warp: Client 192.168.93.102:7761: Requested stage prepare start...
warp: Client 192.168.177.0:7761: Requested stage prepare start...
warp: Client 192.168.239.254:7761: Requested stage prepare start...
warp: Client 192.168.13.217:7761: Finished stage prepare...
warp: Client 192.168.93.102:7761: Finished stage prepare...
warp: Client 192.168.177.0:7761: Finished stage prepare...
warp: Client 192.168.239.254:7761: Finished stage prepare...
warp: All clients prepared...
warp: Requesting stage benchmark start...
warp: Client 192.168.93.102:7761: Requested stage benchmark start...
warp: Client 192.168.177.0:7761: Requested stage benchmark start...
warp: Client 192.168.239.254:7761: Requested stage benchmark start...
warp: Client 192.168.13.217:7761: Requested stage benchmark start...
warp: Running benchmark on all clients...
warp: Client 192.168.93.102:7761: Finished stage benchmark...
warp: Client 192.168.13.217:7761: Finished stage benchmark...
warp: Client 192.168.239.254:7761: Finished stage benchmark...
warp: Client 192.168.177.0:7761: Finished stage benchmark...
warp: Done. Downloading operations...
warp: Downloading operations...
warp: Client 192.168.239.254:7761: Operations downloaded.
warp: Client 192.168.177.0:7761: Operations downloaded.
warp: Client 192.168.93.102:7761: Operations downloaded.
warp: Client 192.168.13.217:7761: Operations downloaded.
warp: Benchmark data written to "warp-remote-2023-07-20[131723]-PrD2.csv.zst"
----------------------------------------
Operation: PUT. Concurrency: 256
* Average: 12069.77 MiB/s, 377.18 obj/s
Throughput, split into 23 x 1s:
* Fastest: 12.75GiB/s, 408.08 obj/s
* 50% Median: 12.12GiB/s, 387.96 obj/s
* Slowest: 9863.2MiB/s, 308.22 obj/s
----------------------------------------
Operation: GET. Concurrency: 256
* Average: 26768.82 MiB/s, 836.53 obj/s
Throughput, split into 298 x 1s:
* Fastest: 26.72GiB/s, 854.93 obj/s
* 50% Median: 26.19GiB/s, 838.18 obj/s
* Slowest: 20.70GiB/s, 662.39 obj/s
warp: Requesting stage cleanup start...
warp: Client 192.168.93.102:7761: Requested stage cleanup start...
warp: Client 192.168.13.217:7761: Requested stage cleanup start...
warp: Client 192.168.177.0:7761: Requested stage cleanup start...
warp: Client 192.168.239.254:7761: Requested stage cleanup start...
warp: Client 192.168.93.102:7761: Finished stage cleanup...
warp: Client 192.168.13.217:7761: Finished stage cleanup...
warp: Client 192.168.239.254:7761: Finished stage cleanup...
warp: Client 192.168.177.0:7761: Finished stage cleanup...
warp: Cleanup done.
```
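- The output above is simply the Job pod's log. A sketch for pulling it once the Job finishes:

```
# Print the benchmark summary from the completed Job:
kubectl logs -n <namespace> job/warp-job
# Optionally keep a copy for later comparison:
kubectl logs -n <namespace> job/warp-job > warp-results.txt
```

Note that the detailed `warp-remote-*.csv.zst` file mentioned in the log is written inside the Job pod's filesystem; if you want to post-process it later with `warp analyze`, mount a volume into the Job so the file survives pod completion.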