_Black Duck Installation Parameters - blackducksoftware/hub GitHub Wiki
This page has been deprecated. Please see the official Kubernetes Black Duck Installation Guide here.
When using Synopsys Operator to install a Black Duck instance, there are many parameters that can be provided to customize the installation.
This page lists the parameters by task, and gives instructions for configuring them from the Console UI and the command line.
The following tasks are covered:
- Black Duck Registration Key
- Black Duck Namespace
- Black Duck Size
- Black Duck Type
- Persistent Storage
- External Database
- Cloning a Database
- Web Server Certificates
- Custom Certificate Authority for Client Authentication
- Changing Web Server Ports and Hosts
- Disabling IPv6
- Enabling a Proxy
- Liveness Probes
- Installing from a Different Registry
- Changing the Black Duck Version
- Black Duck Binary Analysis
- Other Black Duck Configuration
Note: There are some Black Duck configurations that can only be made manually (and not with Synopsys Operator). See Other Black Duck Configuration for details.
Whether you install Black Duck from the command line or with the Console UI, you must provide a Black Duck license key; otherwise, the registration portion of the deployment will fail. In the Console UI, provide the key in the "License Key" field. From the command line, provide the key in the "licenseKey" field.
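In a JSON file, the key lives in the "spec" stanza. A minimal sketch, assuming the overall structure used by the sample files (the apiVersion/kind shown here are taken from the Synopsys Operator custom resource convention, and the key value is a placeholder):

```json
{
  "apiVersion": "synopsys.com/v1",
  "kind": "Blackduck",
  "metadata": {
    "name": "blackduck"
  },
  "spec": {
    "namespace": "blackduck",
    "licenseKey": "YOUR-LICENSE-KEY"
  }
}
```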
You can configure the name of the namespace you want to install Black Duck in. Generally, you want each Black Duck instance to be in its own namespace.
In the Console UI, provide the namespace in the "Namespace" field.
In the sample JSON files, change the "namespace" field. Note: It is recommended that you change the "name" field (in the metadata) to match the namespace field.
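For example, to install into a namespace called `bd-prod` (a hypothetical name), the two fields would look like this, with the metadata "name" matching the "namespace" as recommended:

```json
{
  "metadata": {
    "name": "bd-prod"
  },
  "spec": {
    "namespace": "bd-prod"
  }
}
```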
You can specify how many resources in the cluster will be allocated to the Black Duck instance. The option you pick affects:
- Number of CPU cores
- Amount of Memory
- The number of replicas of certain Black Duck containers (JobRunner, Black Duck Scan, etc.)
- The maximum number of concurrent scans
The table below specifies what is allocated for a Black Duck deployment of a given size:
| Size    | CPU Cores | Memory | JobRunner Replicas | Concurrent Scans |
|---------|-----------|--------|--------------------|------------------|
| Small   | 8         | 32G    | 1                  | 1                |
| Medium  | 12        | 64G    | 4                  | 2                |
| Large   | 14        | 144G   | 6                  | 3                |
| X-Large | 20        | 224G   | 10                 | 5                |
In the Console UI, the "Size" field can be set to "Small", "Medium", "Large", or "X-Large". The same values can be set in the "size" field of the sample JSON file. Pick the size that matches your cluster's resources.
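In the JSON file this is a single field in the "spec" stanza; a sketch (value casing follows the UI labels, but check your sample file):

```json
"size": "Medium"
```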
The "Type" field is used by the OpsSight Connector only; leave it blank if you do not use the OpsSight Connector.
This field indicates whether you want the OpsSight Connector to send scan data to this Black Duck instance. By default, the OpsSight Connector sends scan data to all Black Duck instances that:
- exist in the cluster
- were installed using Synopsys Operator
- have a "type" value that exactly matches that of the OpsSight Connector
If you would like this Black Duck instance to receive OpsSight Connector scan data, ensure that this "type" value matches the value that was set when the OpsSight Connector was installed. (The OpsSight Connector's default value is "worker".)
Synopsys Operator can create persistent volume claims that will ensure that Black Duck's data persists even if its containers are stopped and restarted. For a full discussion of this complex topic, see the Persistent Volume Considerations page.
To enable use of persistent volumes by Black Duck, check the "Persistent Storage" checkbox in the Synopsys Operator Console UI. If editing the example JSON files, set "persistentStorage: true".
If Persistent Storage is enabled, you must also provide the following values:
If you have multiple classes (types) of persistent volumes defined in your cluster, you can specify the class of storage for the volume that is claimed by Black Duck. In the Synopsys Operator Console UI, the "PVC Storage Class" drop-down selector will be populated with the storage classes defined in your cluster, and you can simply select one. If using JSON files, the "pvcStorageClass" value must be set to the exact name of the storage class. To see the names of defined storage classes, use the command:
kubectl get storageclass
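For example, if your cluster defines a storage class named `standard` (a hypothetical name), the two persistent-storage fields in the "spec" stanza would look like:

```json
"persistentStorage": true,
"pvcStorageClass": "standard"
```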
The "Scan Type" field exists solely to help populate the Postgres Claim Size (described below) in the Console UI. If you are using the Console UI, then:
- If you are using OpsSight to scan containers, then select "Images". This will request a large database.
- If you are scanning source code, then select "Artifacts". This will request a smaller database.
- Select "Custom" if you intend to fill in the Postgres Claim Size manually.

If you are editing the JSON file, you can ignore this field.
You must state how much storage will be requested by Black Duck's Postgres instance from the cluster's underlying persistent storage class. If you are using the Console UI, this field will be auto-populated for you based on the "Scan Type" (see above). If you are editing the JSON file, specify "150Gi" if you are scanning source files, and "1000Gi" if you are using OpsSight to scan images. Consult your authorized Synopsys support representative for guidance on database sizing. In the JSON file, create a stanza with the following form:
"pvc": [
{
"name": "blackduck-postgres",
"size": "150Gi"
}
],
Please note that the name of the PVC must be "blackduck-postgres".
(Note: Much more information on database configuration can be found on the Database Considerations page.)
You can configure your Black Duck deployment to use an external database.
Important note: Before you can use this option, you must properly configure the external database. For instructions, please reference the section entitled "Configuring an external PostgreSQL instance" in the Black Duck Compose installation documentation, which can be found here.
Once you have properly configured the external database, you must supply the following parameters appropriately:
If using the Console UI, check the "External Database" checkbox. If using a JSON file, ensure that you have an appropriate "externalPostgres" stanza (see the "hub_external_database.json" example file for an example).
This is the IP Address (or hostname) of the external database.
This is the port of the external database.
This is the admin user id of the external database. (This is usually "blackduck".)
This is the user id of the external database that Black Duck uses to access the database. (This is usually "blackduck_user".)
Check this checkbox (or set the JSON value to true) if the database connection uses SSL.
Password for the admin user. This information is put in a Kubernetes secret.
Password for the regular DB user. This information is put in a Kubernetes secret.
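Putting these parameters together, a sketch of the "externalPostgres" stanza follows. The field names are assumptions modeled on the `hub_external_database.json` example file, so verify them against your copy; the host and passwords shown are placeholders:

```json
"externalPostgres": {
  "postgresHost": "db.example.com",
  "postgresPort": 5432,
  "postgresAdmin": "blackduck",
  "postgresUser": "blackduck_user",
  "postgresSsl": true,
  "postgresAdminPassword": "ADMIN-PASSWORD",
  "postgresUserPassword": "USER-PASSWORD"
}
```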
(Note: Much more information on external databases can be found on the Database Considerations page.)
If you have a pre-existing Black Duck instance in your cluster that was deployed with Synopsys Operator, then you can clone that instance's database and use it to populate the new Black Duck instance.
In the Console UI, the "Clone DB" drop-down provides a list of Black Duck instances. Select a Black Duck instance from this list to clone its database, or choose "None" to start with a fresh database. If using a JSON file from the command line, you can provide the Black Duck name in the "dbPrototype" field (having no "dbPrototype" entry in the JSON file will cause a fresh database to be created).
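In a JSON file, cloning the database of an existing instance named `blackduck-stage` (a hypothetical name) would look like:

```json
"dbPrototype": "blackduck-stage"
```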
Each Black Duck web UI responds to HTTPS requests, and therefore, needs a certificate for authentication. You can either have Synopsys Operator create a self-signed certificate for you, or, you can provide a certificate of your own. In either case, the certificate will be made available to the Black Duck server via a Kubernetes secret.
To choose a self-signed cert, choose "default" in the "Certificate Name" field of the Console UI. In the JSON file, set the "certificateName" field to "default".
If you would like to provide your own custom certificate, set the value to "manual" rather than "default". In this case, you must specify two other field values:
In this field, place the base64-encoded X.509 certificate information for your custom cert. (Copy this from your PEM file; you can include the "----- BEGIN" and "----- END" lines.)
In this field, place the base64-encoded private key of your custom cert. (Copy this from your PEM file; you can include the "----- BEGIN" and "----- END" lines.) If you deployed from a JSON file, consider deleting the file afterward to avoid compromising the private key.
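A sketch of the three fields for a custom certificate in the "spec" stanza. The "certificate" and "certificateKey" field names are assumptions based on the sample files, and the PEM content is truncated placeholder text:

```json
"certificateName": "manual",
"certificate": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----",
"certificateKey": "-----BEGIN RSA PRIVATE KEY-----\n...\n-----END RSA PRIVATE KEY-----"
```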
Each Black Duck instance provides an API that can be accessed by a client via a TLS connection. Each client must authenticate with the Black Duck instance, and one supported form of authentication is certificate auth. When the client passes the certificate to the server during the TLS handshake, the server must verify that the client certificate itself was signed by a trusted authority. You can provide a certificate for the trusted authority so the server will trust your clients. To provide a custom certificate-authority certificate, provide the certificate details under the "Custom CA for certificate authentication" field in the console UI, or the "authCustomCa" field in a JSON file. The value should be base64 encoded, for example:
authCustomCa: |
-----BEGIN CERTIFICATE-----
MIIE1DCCArwCCQCuw9TgaoBKVDANBgkqhkiG9w0BAQsFADAsMQswCQYDVQQGEwJV
UzELMAkGA1UECgwCYmQxEDAOBgNVBAMMB1JPT1QgQ0EwHhcNMTkwMjA2MDAzMjM3
...
o1UGNrGgU1rK8K+/2htOH9JhlPFWHQkk+wvGL6fFI7p+6TGes0KILN4WioOEKY0t
0V1Zr8bejDW49cu1Awy443SrauhFLOInubZLA8S9ZvwTVIvpmTDjdQ==
-----END CERTIFICATE-----
Note: The CN field of the client certificate must match an existing user name.
By default, Black Duck's web server is publicly available on port 443 with host "localhost", and internal communication goes through port 8443. Keeping the defaults is recommended, but they can be changed via environment variables.
If using the Console UI, then edit the "PUBLIC_HUB_WEBSERVER_HOST", "PUBLIC_HUB_WEBSERVER_PORT", and "HUB_WEBSERVER_PORT" to the values of your choice.
If editing a JSON file, make sure your "spec" stanza has text of the form:
"environs": [
"PUBLIC_HUB_WEBSERVER_HOST:localhost",
"PUBLIC_HUB_WEBSERVER_PORT:443",
"HUB_WEBSERVER_PORT:8443"
],
Black Duck expects IPv6 to be enabled in order to function properly. If you want to install a Black Duck instance in a cluster that has IPv6 disabled, then two environment variables must be set.
If using the Console UI, then:
- Edit "IPV4_ONLY" and set the value to "1".
- Add a "BLACKDUCK_HUB_SERVER_ADDRESS" variable and set the value to "0.0.0.0".
If editing a JSON file, make sure your "spec" stanza has text of the form:
"environs": [
"IPV4_ONLY:1",
"BLACKDUCK_HUB_SERVER_ADDRESS:0.0.0.0"
],
If you need to configure Black Duck to work with an external proxy (for example, so that Black Duck can reach its registration or KnowledgeBase services over the Internet), work with your proxy server administrator to set the environment variables discussed below.
If using the Console UI, add the following three environment variables with the values of your choice:
- "HUB_PROXY_SCHEME"
- "HUB_PROXY_HOST"
- "HUB_PROXY_PORT"
If editing a JSON file, make sure your "spec" stanza has text of the form:
"environs": [
"HUB_PROXY_SCHEME:https",
"HUB_PROXY_HOST:proxy.my.company",
"HUB_PROXY_PORT:8080"
],
You can configure Synopsys Operator to enable liveness probes, which act as health checks. If enabled, Kubernetes/OpenShift verifies that each Black Duck container is running properly and restarts it if it fails. (By default, Kubernetes only detects when a container or pod has exited, and restarts the container in that case; liveness probes additionally detect whether the application running inside each container is healthy, and restart it if not.)
By default, Liveness Probes are disabled. To enable the Liveness Probes, either check the "Liveness Probes" checkbox in the Console UI, or if using a JSON file, set "livenessProbes" to "true".
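In a JSON file this is a single field in the "spec" stanza; shown here as a boolean, though some sample files may quote the value:

```json
"livenessProbes": true
```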
By default, Synopsys Operator will pull images from Docker Hub. If you wish to pull images from a secure registry you must first put the Black Duck images in that registry. For instructions, see Pushing Black Duck Images to a Secure Registry.
Once the Black Duck images are in the secure registry, you must do several things, listed below.
See the Creating an Image Pull Secret page for guidance on creating an image pull secret, which stores the credentials needed to access the secure registry.
With the Black Duck images in the secure registry and the image pull secret created, you can edit the image tags to point to that registry.
If using the Console UI, go to the Container Image Tags section, and edit the values, adhering to the following format:
<registry>/<project>/<image-name>:<version>
For example, you might change:
blackducksoftware/blackduck-cfssl:1.0.0
to
registry.connect.redhat.com/blackducksoftware/blackduck-cfssl:1.0.0
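In a JSON file, the same change is made in the "imageRegistries" stanza. Only one entry is shown here; repeat the pattern for every Black Duck image:

```json
"imageRegistries": [
  "registry.connect.redhat.com/blackducksoftware/blackduck-cfssl:1.0.0"
],
```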
If you wish to either:
- install a version of Black Duck different from the defaults in the console UI / JSON files, or,
- upgrade a Black Duck server to a new version
then you must do two things:
- edit the HUB_VERSION environment variable
- edit the Image Tags
Note: Before modifying these values, please contact Black Duck support to get the proper version value for each and every Black Duck container.
This variable tells the Black Duck instance which version it is and which version of the scan client to use.
If using the Console UI, then edit the "HUB_VERSION" variable as desired.
If editing a JSON file, make sure your "spec" stanza has text of the form:
"environs": [
"HUB_VERSION:2018.11.0"
],
If using the Console UI, modify each Container Image Tag value to match the version you wish to install. If editing a JSON file, edit the values appropriately in the "imageRegistries" stanza.
To enable Black Duck Binary Analysis (BDBA), there are two steps:
- Ensure that the "USE_BINARY_UPLOADS" environment variable is set to "1".
- Ensure that you include the "appcheck-worker", "rabbitmq", and "blackduck-upload-cache" image tags.

Instructions for each step are provided below.
If using the Console UI, then ensure that the following line appears in the Environment Variables list:
"USE_BINARY_UPLOADS:1"
If editing a JSON file, make sure your "spec" stanza has text of the form:
"environs": [
"USE_BINARY_UPLOADS:1"
],
If using the Console UI, ensure that you specify the following Container Image Tags:
docker.io/blackducksoftware/appcheck-worker:1.0.1
docker.io/blackducksoftware/rabbitmq:1.0.0
docker.io/blackducksoftware/blackduck-upload-cache:1.0.3
If editing a JSON file, make sure your "spec" stanza has text of the form:
"imageRegistries": [
"docker.io/blackducksoftware/appcheck-worker:1.0.1",
"docker.io/blackducksoftware/rabbitmq:1.0.0",
"docker.io/blackducksoftware/blackduck-upload-cache:1.0.3"
],
There are some Black Duck configurations that can only be made manually (and not with Synopsys Operator). These configurations include persistent volumes for Postgres, and using an external database. To learn how to install Black Duck without using Operator, see Installing Black Duck in Clusters not Running the Operator.