**Important Note:** The procedure described on this page has been deprecated. This page describes how to configure Black Duck to use persistent volume claims in Kubernetes/OpenShift when Black Duck is not deployed with Synopsys Operator. Do not use these instructions for Black Duck instances deployed with Synopsys Operator. The documentation below is provided for historical purposes only and should not be followed unless instructed to do so by Synopsys support personnel.
# Deprecated: Configuring Persistent Storage
## Replacing emptyDir with Persistent Storage
By default, the Black Duck installation creates containers that use `emptyDir` for storage. An emptyDir volume persists for as long as its pod runs on a node, but if the pod is removed from the node, the data in the emptyDir is deleted.
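For reference, the default deployments declare these volumes with emptyDir, roughly like the following sketch (the volume name `dir-webapp` is taken from the list below):

```yaml
# Sketch of how an emptyDir volume appears in the default deployments
volumes:
  - name: dir-webapp
    emptyDir: {}
```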
You can replace emptyDir with persistentVolumeClaim, hostPath, or other volume types to provide your Black Duck instance with persistent storage. The following volumes would need to be modified to give them persistent storage:

- postgres-persistent-vol: 100G (or more; talk to support if you have large scan volumes)
- dir-webserver: 100M
- dir-webapp: 100M
- solr-dir: 100M
- dir-registration: 100M
- dir-zookeeper: 100M
- dir-scan: 100M
- dir-authentication: 100M
If you are well-versed in Kubernetes storage, you can use the above as a guideline and easily set up storage for these emptyDir volumes on your own. If not, or if you simply need a refresher on how Kubernetes volumes and PVCs work, read on.
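For example, if your cluster supports dynamic provisioning, a claim sized for the PostgreSQL volume from the list above might look like the following sketch (the claim name is illustrative; the 100Gi figure follows the sizing guidance above):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvclaim-bd-hub-postgres   # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi              # per the sizing guidance above; increase for large scan volumes
```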
## Example
The following describes how to configure persistent storage for the CFSSL container in a production scenario.
We present two examples of how to use persistent storage for Black Duck in cloud native environments: one with NFS-based storage and one with dynamic storage.
### Example of attaching an NFS PVC
For NFS, a common pattern is to have a large filer dedicated to individual apps or teams. Although dynamic storage is easier to maintain for large-scale clusters, NFS is often used because it is simple and backward compatible with other storage idioms.
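For reference, a filer export for such a layout might look like the following sketch (the export path and client range are hypothetical; adjust them for your environment):

```
# /etc/exports on the NFS filer (hypothetical path and client range)
/data/blackduck   10.128.0.0/16(rw,sync,no_root_squash)

# reload the exports after editing
exportfs -ra
```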
To set up NFS as your base storage platform for the Hub, do the following:
- Set up an NFS storage filer.
- Create a `blackduck` directory (or other obvious root directory) under your exported directory (you can find this in /etc/exports).
- Create a persistent volume, and a corresponding claim, like so:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-bd-hub-cfssl
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.128.0.11
    path: "/data/blackduck/cfssl"
```
And now, create several persistent volume claims, one for each volume you are replacing:
Note: you'll want to replace "cfssl" with a name that reflects the volume you are replacing.
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvclaim-bd-hub-cfssl
  annotations:
    volume.beta.kubernetes.io/storage-class: ""
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```
In the above example, the directory for CFSSL cert storage will be reachable on your NFS filer at `/data/blackduck/cfssl`.
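With the manifests written, you can create the objects with kubectl and confirm the claim binds (the file names and the `myhub` namespace here are illustrative):

```
kubectl create -f pv-bd-hub-cfssl.yaml
kubectl create -f pvclaim-bd-hub-cfssl.yaml -n myhub
kubectl get pv,pvc -n myhub    # the claim should report STATUS "Bound"
```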
### Example of attaching a Dynamic Storage PVC
For dynamic storage, you can follow the exact same instructions above, with two exceptions:

- Replace `ReadWriteMany` with `ReadWriteOnce` in each claim.
- Remove the `volume.beta.kubernetes.io/storage-class: ""` annotation entirely; it's not needed for dynamic storage.
The nice thing here is that you don't need to create a volume at all!
Thus, for dynamic storage, you will just create PersistentVolumeClaims for each directory in the deployments. Make sure that all of your PVCs are bound by running `kubectl get pvc` in your Hub namespace.
That is, you will make the following PVC, without making any corresponding PV, to set up storage for cfssl.
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvclaim-bd-hub-cfssl
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Mi
```
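If your cluster has more than one storage class, you can pin the claim to a specific provisioner with `storageClassName`; the class name `standard` below is illustrative, and omitting the field uses the cluster default:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvclaim-bd-hub-cfssl
spec:
  storageClassName: standard   # illustrative; omit to use the cluster's default class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Mi
```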
Finally, set the PVC stanza in the corresponding configuration YAML file.
Regardless of whether your PVC was created with a backing volume or with dynamic storage, you will need to point your container's volume at the claim once its storage is set up:
```yaml
spec:
  volumes:
    - persistentVolumeClaim:
        claimName: pvclaim-bd-hub-cfssl
      name: pv-bd-hub-cfssl
  containers:
```
Further down in the configuration YAML file, in the same service stanza, there should be a volume mount stanza of the form:
```yaml
volumeMounts:
  - mountPath: /xxxx/yyyy/zzzz
    name: VOLUME_NAME
```
Ensure that VOLUME_NAME in the volume mount section matches the volume name from the previous step; it is this common name that associates the claim with the mount. If they don't match, change VOLUME_NAME here to the name you used in the previous step.
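Putting the two stanzas together, a consistent fragment for the cfssl service might look like this sketch (the container name is illustrative, and the mount path placeholder is carried over from above):

```yaml
spec:
  volumes:
    - name: pv-bd-hub-cfssl              # VOLUME_NAME
      persistentVolumeClaim:
        claimName: pvclaim-bd-hub-cfssl
  containers:
    - name: cfssl                        # illustrative container name
      volumeMounts:
        - mountPath: /xxxx/yyyy/zzzz     # placeholder path from the stanza above
          name: pv-bd-hub-cfssl          # must match VOLUME_NAME in spec.volumes
```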
## Verifying Success
After making this configuration and starting your Black Duck containers, each of your pods should schedule with attached storage. To confirm that your PVCs are working correctly, check that each claim is Bound; you should see something like this:
```
[19:30:49] training:hub git:(master*) $ kubectl get pvc -n myhub
NAME            STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
gce-dir-cfssl   Bound     pvc-76cc17b1-4fcf-11e8-a4df-42010a80007a   1Gi        RWO            standard       15m
```
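If a claim stays in Pending instead of Bound, describing it usually shows why (the claim and namespace names below are the ones from the example output above):

```
kubectl describe pvc gce-dir-cfssl -n myhub
kubectl get events -n myhub --sort-by=.lastTimestamp
```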