Secure Splunk Forwarding with Mutual TLS using ADCS Certificates
We have already configured SSL for HTTPS using certificates obtained from our Active Directory Certificate Services (ADCS) web server in a previous guide, so today we will configure traffic encryption for a different segment of the Splunk Enterprise data collection architecture.
This lab covers encrypting the data transfer between universal forwarders residing on different endpoints (in our case, Windows and RHEL 7 servers) and the on-premise Splunk Enterprise servers that act as indexers/heavy forwarders in this scenario.
While Splunk does a good job of shipping with default self-signed certificates out of the box as a means of encrypting important communications like splunkd, this is not up to snuff for production-grade environments, especially if you already have ADCS (or another PKI solution) installed within your environment.
We will be taking the extra step of leveraging Mutual TLS (mTLS) as a means of authenticating between our universal forwarders (clients) and our servers. In addition, we will configure this encrypted forwarding in a load-balanced fashion between two distinct heavy forwarders/indexers, which provides failover capabilities without sacrificing security.
I'd be remiss not to mention that, from my conversations with Splunk Enterprise experts, this level of security is not common even among the largest Splunk customers, which explains the lack of detailed documentation properly outlining all of the parameters required to pull this off on the ADCS, UF, and HF sides of things. Please reach out if you have any questions, as I have not seen any other documentation online that delves into this process in the depth that we will today.
- How to obtain certificates signed by a third party for inter-Splunk communication
- Configure Splunk forwarding to use your own SSL certificates
We will be building off of several previous guides for this lab, with many requisite pieces of infrastructure already in place, such as an operational ADCS instance capable of issuing certificates via web portal, as well as a Splunk Enterprise instance installed on RHEL 7.
💡 If you are unsure how to install Splunk as a dedicated non-root user, I've written a fairly in-depth guide on this practice, which can be found here
Also required for this lab are at least one Windows server and one RHEL 7 (Linux) server, which will be used to test SSL connections and configurations between the UFs residing on those hosts and our primary indexer.
Some helpful tools for troubleshooting and checking our progress along the way are tree (which recursively lists the files within a folder and its subfolders) and net-tools, which lets us check whether certain ports are mapped to the correct applications and are indeed listening. We should also grab zip and unzip to compress some of the certificate files before sending them off to our Subordinate CA to be approved. Install all four of these tools if you haven't already with the command below:
$ sudo yum install -y net-tools tree zip unzip
Lastly, we will be using the same basic structure of deployment apps that was discussed in an earlier article I wrote, which covers creating custom apps for endpoints in more detail.
First things first, we need to organize the various files that we will use to generate, secure, and distribute certificates within a single directory. This is not strictly required by Splunk, but it will help us organize and more easily access the different files when we need them.
Since we have already generated certificates in a previous lab to enable HTTPS for our Splunk Web instances, we will use the $SPLUNK_HOME/etc/auth/ssl/s2s directory as the folder to store SSL certificates in.
As our splunk user within the deployment server, create the following directory structure under /opt/splunk/etc/auth:
$ pwd
/opt/splunk/etc/auth
$ mkdir -p ssl/s2s/{certs,keys,requests,configs}
This will create a new s2s directory, as well as four subdirectories that will help organize the different types of files we need to create in order to generate and concatenate our certificates later on.
💡 Splunk-to-Splunk (S2S) communication is covered in the Splunk Validated Architectures document created and distributed by Splunk in January of 2021 to give Splunk Admins a better idea of best practices, including the idea of encrypting S2S communications using non-default (ADCS) certificates.
Now that we have our directory structure created, the next step is the creation of new private keys, which we will use to generate certificate signing requests (.csr) to be sent to our Issuing/Subordinate CA for approval. This can be done using the openssl binary that is prepackaged with Splunk for ease of use:
💣 Important: Generate a private key for each of the certificate requests we will be creating, which in our case is one for our primary HF, one for the secondary HF, and one for the shared client (UF) certificate.
💡 While you are able to use Triple DES (3DES) to protect the key and a 2048-bit key length if this is your preferred method, it's worth considering a more secure cipher (like AES) and/or a longer key length such as 4096 bits for the most secure installations.
$ pwd
/opt/splunk/etc/auth/ssl/s2s
$ /opt/splunk/bin/splunk cmd openssl genrsa -aes256 -out ./keys/splhf01_private.key 2048
$ /opt/splunk/bin/splunk cmd openssl genrsa -aes256 -out ./keys/splhf02_private.key 2048
$ /opt/splunk/bin/splunk cmd openssl genrsa -aes256 -out ./keys/spluf_private.key 2048
When prompted, enter a complex password that will protect each key from unauthorized viewing. The keys generated by these commands will be used to sign our Certificate Signing Requests (CSRs) in the next step.
💣 A great way to screw things up fast is to forget this private key password, so make sure you have written it down somewhere safe or stored it in a dedicated password manager.
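💡 As an optional sanity check (not something the rest of this guide depends on), you can confirm that a key was written correctly, and that you still remember its passphrase, before moving on; openssl should report RSA key ok after you enter the password:
$ /opt/splunk/bin/splunk cmd openssl rsa -check -noout -in ./keys/splhf01_private.key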
Before we create our new CSRs, let's make things easier on ourselves by defining all of the important configurations and parameters within dedicated .cnf files, stored within our configs folder, that will dictate to openssl the options we'd like for each of the .csr files:
💡 We are essentially required to do this because I couldn't find any other way to embed our desired name for SSL name checking other than creating a .cnf file that holds this information.
💣 Important: Create a unique .cnf file for each of the certificate requests we will be creating, which in our case is one for our primary HF, one for the secondary HF, and one for the shared client (UF) certificate.
$ pwd
/opt/splunk/etc/auth/ssl/s2s/configs
$ cat splhf01_req.cnf
[ req ]
default_bits = 2048
distinguished_name = req_distinguished_name
req_extensions = req_ext
prompt = no
[ req_distinguished_name ]
countryName = <country abbrev>
stateOrProvinceName = <state abbrev>
localityName = <city>
organizationName = <orgname>
commonName = <FQDN of Splunk server>
[ req_ext ]
subjectAltName = @alt_names
[alt_names]
DNS.1 = <FQDN of Splunk server>
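💡 If a server needs to be reachable under more than one name (for example a short hostname or alias in addition to the FQDN), additional SAN entries can be added to the same alt_names section, along these lines:
[alt_names]
DNS.1 = <FQDN of Splunk server>
DNS.2 = <short hostname or alias>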
Now that we have our .cnf files created, generating CSRs for our dedicated indexer/heavy forwarders is very simple. Using the .key files created in the previous step, we can again employ the openssl command that ships with Splunk (via splunk cmd) to request new .csr files, shown below:
$ pwd
/opt/splunk/etc/auth/ssl/s2s
$ /opt/splunk/bin/splunk cmd openssl req -new -key ./keys/splhf01_private.key -out ./requests/splhf01_req.csr -config ./configs/splhf01_req.cnf
$ /opt/splunk/bin/splunk cmd openssl req -new -key ./keys/splhf02_private.key -out ./requests/splhf02_req.csr -config ./configs/splhf02_req.cnf
$ /opt/splunk/bin/splunk cmd openssl req -new -key ./keys/spluf_private.key -out ./requests/spluf_req.csr -config ./configs/spluf_req.cnf
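Before submitting anything, it doesn't hurt to inspect one of the requests and confirm that the Subject Alternative Name from the .cnf file actually made it in; something along these lines should show the DNS entry (the grep pattern is just one way to filter the output):
$ /opt/splunk/bin/splunk cmd openssl req -in ./requests/splhf01_req.csr -noout -text | grep -A1 "Subject Alternative Name"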
Confirm that you have created/placed all of the requisite files in the proper folders prior to submitting your .csr files to your ADCS server for approval.
💣 Check the directory and file structure shown below to ensure that you have all of the files needed for your own installation. If you proceed with some of these files missing, you will receive errors at some point or another!
[splunk@splds01 s2s]$ pwd
/opt/splunk/etc/auth/ssl/s2s
[splunk@splds01 s2s]$ tree
.
├── certs
├── configs
│   ├── splhf01_req.cnf
│   ├── splhf02_req.cnf
│   └── spluf_req.cnf
├── keys
│   ├── splhf01_private.key
│   ├── splhf02_private.key
│   └── spluf_private.key
└── requests
    ├── splhf01_req.csr
    ├── splhf02_req.csr
    └── spluf_req.csr
Now that we have created our .csr files, submit them to your existing Windows Certificate Authority server using the web portal that was configured in previous labs of mine.
💡 An easy way to get these requests transferred to a Windows-based host that has access to this web enrollment is to use SSH to remotely access your Deployment Server from that host and copy the .csr files directly from the command line into the dialog box, using cat to display them.
Within the web portal for ADCS hosted by our Subordinate/Issuing CA, navigate to Request a Certificate, choose to submit an Advanced Certificate Request, and finally copy and paste each .csr file from our dedicated deployment server into the dialog box (shown below):
💣 Warning: It is absolutely critical that you choose the proper certificate templates prior to submitting each request. Certificates for each heavy forwarder should only have the server role/purpose defined, while the Universal Forwarder certificate needs to be multi-purpose, with both server and client roles.
While we are signed into our certificate server, we will also need to retrieve the public certificate of the Subordinate/Issuing CA itself, as well as the Root CA public cert and any intermediary CAs between them in the PKI hierarchy. These will be used to validate the authenticity of the certificates issued to us when they are presented to the clients that our indexer/heavy forwarder will interact with regularly.
Do this by returning to the Home page of the ADCS web portal and choosing Download a CA Certificate, then choose Download CA Certificate again on the following screen:
💣 Ensure you grab the public certificate for your Root CA as well as every subordinate CA in the PKI hierarchy between your Splunk server and the Root CA, since we will need to build a "chain" of concatenated certificates for our internal PKI authentication architecture.
Perform a secure copy of these certificates using a tool like scp to transmit them via the command line back to the Splunk Enterprise instance that the request was initially made from (aka the Splunk indexer/heavy forwarder that will be receiving traffic from the universal forwarders).
💡 To simplify things a bit, it makes the most sense to put all of the aforementioned certificates within a folder, then compress that folder and use scp to transfer the .zip file to our Deployment Server in order to deploy them via apps to our Heavy Forwarder and Universal Forwarder instances later on.
PS> Compress-Archive -Path .\SplunkS2SCerts\ -DestinationPath .\SplunkS2SCerts.zip
PS> scp -r SplunkS2SCerts.zip user@splds01:
Once the certificates have been successfully transferred back to the deployment server, make sure that the dedicated splunk user either has access to them, or place them within the splunk user's home directory, so that it is able to perform the necessary operations to transform the DER certificates to the PEM format that Splunk is expecting:
💣 Since the .zip file was transferred to a user other than splunk, you will need to use something like chown to transfer ownership of the .zip file to splunk before moving the files to the home directory.
$ whoami
splunk
$ unzip *.zip
$ cp ~/*.cer /opt/splunk/etc/auth/ssl/s2s/certs
Now would be a good time to ensure that we've gotten all of the requisite certificates that we need from our Subordinate/Issuing CA, which can be done by simply issuing another tree command against our /opt/splunk/etc/auth/ssl/s2s/certs folder:
💡 Keep in mind that if you only have a single root CA, you will only need that .cer file when you concatenate the certs later on; however, if you are operating with a two-, three-, or more-tier PKI hierarchy (for this lab we are using two tiers), you will need the public certificates of both the Root CA and the Issuing CA for proper concatenation.
[splunk@splds01 certs]$ tree
.
├── rootca_pub.cer
├── splhf01_pub.cer
├── splhf02_pub.cer
├── spluf_pub.cer
└── subca_pub.cer
Splunk strictly uses the Privacy Enhanced Mail (PEM) certificate format, so we will need to again use openssl to convert our existing .cer files into the proper .pem format that Splunk is expecting, after which we can remove the original .cer files:
$ pwd
/opt/splunk/etc/auth/ssl/s2s/certs
$ /opt/splunk/bin/splunk cmd openssl x509 -in splhf01_pub.cer -inform DER -out splhf01_pub.pem -outform PEM
$ /opt/splunk/bin/splunk cmd openssl x509 -in splhf02_pub.cer -inform DER -out splhf02_pub.pem -outform PEM
$ /opt/splunk/bin/splunk cmd openssl x509 -in spluf_pub.cer -inform DER -out spluf_pub.pem -outform PEM
$ /opt/splunk/bin/splunk cmd openssl x509 -in rootca_pub.cer -inform DER -out rootca_pub.pem -outform PEM
$ /opt/splunk/bin/splunk cmd openssl x509 -in subca_pub.cer -inform DER -out subca_pub.pem -outform PEM
$ rm -rf *.cer
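If you want to double-check the conversion (and the certificate templates chosen earlier), a quick look at the subject, issuer, and extended key usage of each .pem should line up with what was requested; the UF certificate in particular should list both server and client authentication if the multi-purpose template was used:
$ /opt/splunk/bin/splunk cmd openssl x509 -in spluf_pub.pem -noout -subject -issuer -dates
$ /opt/splunk/bin/splunk cmd openssl x509 -in spluf_pub.pem -noout -text | grep -A1 "Extended Key Usage"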
Using the tree command, we can view all the files that should now be present within our $SPLUNK_HOME/etc/auth/ssl/s2s/certs folder, now that we have completed the transfer of ADCS certificates onto our Deployment Server and converted them into the proper .pem format before removing the unusable .cer files:
[splunk@splds01 certs]$ tree
.
├── rootca_pub.pem
├── splhf01_pub.pem
├── splhf02_pub.pem
├── spluf_pub.pem
└── subca_pub.pem
Using the files shown above, we will be able to configure our Universal Forwarders, and even other Splunk Enterprise instances, to encrypt Splunk-to-Splunk (S2S) communications in the next part of this guide.
Our heavy forwarders (as well as our UF clients) will need concatenated certificates: single .pem files composed of the several certificates/keys used for authentication. Prepare the certificates that will be used by the Universal Forwarder and heavy forwarders by combining three of the files within our /opt/splunk/etc/auth/ssl/s2s folder, in the following order:
- The HF/UF server/client certificate (splhf01_pub.pem, splhf02_pub.pem, spluf_pub.pem)
- The encrypted private key for your server certificate (splhf01_private.key, splhf02_private.key, spluf_private.key)
- Concatenated certificate chain for intermediate and root certificates (adcs.pem)
💡 If you are more curious about this process, or having trouble getting it working, the notes for this particular section are taken primarily from this Splunk Docs post, concerning the preparation of third party certs for SSL communication.
First, let's combine our Root CA and Subordinate CA certificates in the proper order (the root certificate should always be at the bottom) in order to append them to other certificates easily:
$ pwd
/opt/splunk/etc/auth/ssl/s2s/certs
$ cat subca_pub.pem rootca_pub.pem > adcs.pem
Now we will combine each issued .pem file, along with its associated encrypted private key, with our existing ADCS certificate chain in order to create the final server cert for each indexer/forwarder.
$ cat splhf01_pub.pem ../keys/splhf01_private.key adcs.pem > splhf01.pem
$ cat splhf02_pub.pem ../keys/splhf02_private.key adcs.pem > splhf02.pem
$ cat spluf_pub.pem ../keys/spluf_private.key adcs.pem > spluf.pem
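As a quick sanity check on the chain (assuming a two-tier hierarchy like the one in this lab), openssl verify should report OK for each issued certificate when validated against the concatenated adcs.pem:
$ /opt/splunk/bin/splunk cmd openssl verify -CAfile adcs.pem splhf01_pub.pem splhf02_pub.pem spluf_pub.pem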
In order to effectively control and deploy both the HF and UF sides of this communication without having to directly "touch" either party, we will use our existing deployment server to distribute similar but distinctly different apps to them.
💣 It's very important to have opened up port 8089/tcp on our designated deployment server, as this is the port that clients (UF/HF) will be reaching out to for configuration updates.
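💡 On RHEL 7 hosts running firewalld (adjust accordingly if you use a different firewall), opening the management port on the deployment server, and the SSL forwarding port we will configure later on each heavy forwarder, looks something like this:
$ sudo firewall-cmd --permanent --add-port=8089/tcp   # on the deployment server
$ sudo firewall-cmd --permanent --add-port=9998/tcp   # on each heavy forwarder
$ sudo firewall-cmd --reload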
Until now, our Deployment Server hasn't actually been activated as a deployment server; it has essentially just been a normal Splunk Enterprise instance. The way we "activate" our deployment server is by placing/creating an app within the deployment-apps folder, shown below:
💡 Use mkdir combined with the -p flag to recursively create the uf_outputs_ssl directory as well as two subdirectories, local and certs, which will hold our .conf files and our .pem/.key files, respectively.
$ pwd
/opt/splunk/etc/deployment-apps
$ mkdir -p uf_outputs_ssl/{local,certs}
The local directory is mandatory for all apps distributed by Splunk, both native and custom, and stores important config files like inputs.conf and server.conf, to name a few.
Now that we have our uf_outputs_ssl app created, add our existing spluf.pem concatenated certificate, as well as the adcs.pem PKI certificate chain, to that app's certs folder with the following commands:
$ cd /opt/splunk/etc/deployment-apps/uf_outputs_ssl/certs
$ cp ../../../auth/ssl/s2s/certs/adcs.pem .
$ cp ../../../auth/ssl/s2s/certs/spluf.pem .
Now that we have our certificates placed within the uf_outputs_ssl app, we need to move on to configuring both the outputs.conf file and the server.conf file within the local directory:
💡 While it is normally bad practice to store plaintext passwords in configuration files, Splunk Enterprise encrypts all certificate passwords from cleartext upon restart. Keep in mind this automatic encryption does not occur outside of the $SPLUNK_HOME/etc/system/local directory, aside from the apps we will be deploying.
$ pwd
/opt/splunk/etc/deployment-apps/uf_outputs_ssl/local
$ vi outputs.conf
[tcpout]
defaultGroup = splhf_ssl_lb
autoLB = true
autoLBFrequency=60
[tcpout:splhf_ssl_lb]
disabled=0
server = <FQDN of Splunk Primary Indexer/HF>:9998,<FQDN of Splunk Secondary Indexer/HF>:9998
clientCert = C:\Program Files\SplunkUniversalForwarder\etc\apps\uf_outputs_ssl\certs\spluf.pem
sslPassword = <redacted>
sendCookedData = true
useClientSSLCompression = true
sslCommonNameToCheck = <FQDN of Splunk Primary Indexer/HF>,<FQDN of Splunk Secondary Indexer/HF>
sslVerifyServerCert = true
Now that we have our outputs.conf configured, let's move on to server.conf, which only requires a few lines of configuration:
💡 Notice how in both configuration files I have designated absolute file paths rather than relative paths. While this is generally considered bad practice, in my own testing I found that relative paths are not supported for these settings, and absolute paths were the only way I got it to work. This means that forwarders on different operating systems/file structures will likely need their own forwarding apps.
$ pwd
/opt/splunk/etc/deployment-apps/uf_outputs_ssl/local
$ vi server.conf
[sslConfig]
sslRootCAPath = C:\Program Files\SplunkUniversalForwarder\etc\apps\uf_outputs_ssl\certs\adcs.pem
After this configuration, we can once again use the tree command to verify all of our required files are in their correct places:
$ tree /opt/splunk/etc/deployment-apps/uf_outputs_ssl
uf_outputs_ssl
├── local
│   ├── server.conf
│   └── outputs.conf
└── certs
    ├── spluf.pem   # concatenated client certificate for the UF
    └── adcs.pem    # concatenated CA certificate chain (subordinate + root)
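💡 Because the paths in the app above point at a Windows universal forwarder install, a Linux-based UF would presumably need its own copy of this app (the app name nix_uf_outputs_ssl below is just a hypothetical example) where only the absolute paths change, along these lines:
# outputs.conf
clientCert = /opt/splunkforwarder/etc/apps/nix_uf_outputs_ssl/certs/spluf.pem
# server.conf
sslRootCAPath = /opt/splunkforwarder/etc/apps/nix_uf_outputs_ssl/certs/adcs.pem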
In a similar manner to our Universal Forwarder app uf_outputs_ssl, we need to create another app within deployment-apps for the inputs on each of our Heavy Forwarders:
💣 Ensure that you perform each of the following steps in accordance with the number of heavy forwarders you have; I will be repeating them twice, for hf01 and hf02, but you may only have one.
$ pwd
/opt/splunk/etc/deployment-apps
$ mkdir -p hf01_inputs_ssl/{local,certs}
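As with the UF app, the concatenated server certificate and CA chain for each heavy forwarder presumably need to land in the new app's certs folder so they get deployed alongside the configuration; the paths below follow the directory layout used earlier in this guide:
$ cd /opt/splunk/etc/deployment-apps/hf01_inputs_ssl/certs
$ cp ../../../auth/ssl/s2s/certs/splhf01.pem .
$ cp ../../../auth/ssl/s2s/certs/adcs.pem .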
Now that we have created the app designated for our primary heavy forwarder (iterate for however many HFs you are deploying), we need to add an inputs.conf in order to specify which port we'd like our Splunk indexer/heavy forwarder to listen on, as well as the many other settings supporting our load-balanced, SSL-focused deployment.
$ vi /opt/splunk/etc/deployment-apps/hf01_inputs_ssl/local/inputs.conf
[default]
host=splhf01
# optional cleartext (non-SSL) input as a backup or for troubleshooting; disabled for now
[splunktcp:9997]
disabled=1
[splunktcp-ssl:9998]
disabled=0
[SSL]
serverCert = /opt/splunk/etc/apps/hf01_inputs_ssl/certs/splhf01.pem
sslPassword = <redacted>
requireClientCert = true
sslVersions = *,-ssl2
sslCommonNameToCheck = spluf.yellowstone.local
Next we need to again add the all-important server.conf file, located within our $SPLUNK_HOME/etc/system/local directory, along with a single configuration line:
$ vi server.conf
[sslConfig]
sslRootCAPath = /opt/splunk/etc/auth/ssl/s2s/subca_pub.pem
$ /opt/splunk/bin/splunk restart
Now that we have all of our deployment apps ready to go, we can configure existing Splunk forwarders or servers to connect to our deployment server for updates with the following command:
$ pwd
/opt/splunk/bin
$ ./splunk set deploy-poll <FQDN of deployment server>:8089
**enter splunk credentials**
If you are unsure about how to create server classes and/or add clients/apps to those classes, here's a link to a decent article by Splunk that outlines the general functionality of forwarder management in Splunk Web.
- Verification of a successful SSL configuration can be done easily: Splunk has created a helpful Validate your Configuration article that is purpose-built for this exact scenario.
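As a quick first pass before working through that article, the net-tools package we installed earlier can confirm that the SSL listener is actually up on each heavy forwarder, and an openssl client connection run from the deployment server (where adcs.pem already lives) should show the server certificate chain being presented; the handshake itself may be rejected because no client certificate is offered, which is expected with requireClientCert enabled:
$ netstat -tlnp | grep 9998   # run on each heavy forwarder; splunkd should be listening
$ /opt/splunk/bin/splunk cmd openssl s_client -connect <FQDN of Splunk Primary Indexer/HF>:9998 -CAfile /opt/splunk/etc/auth/ssl/s2s/certs/adcs.pem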