OpenLI Tutorial 19: Adding TLS to OpenLI Part 2 - OpenLI-NZ/openli GitHub Wiki
You may either watch the tutorial lesson on YouTube by clicking on the image above, or you can download the slides and read them alongside the transcript provided below.
Welcome back. Today we are going to resume our lesson on how to enable TLS encryption in an OpenLI deployment, after I left you on somewhat of a cliffhanger last time.
Just in case you’ve forgotten, here’s where we left off at the end of the previous lesson.
We had configured our provisioner to enable TLS by telling it where to find the keys and certificates that it would need to establish an encrypted connection with the other components.
Unfortunately, when we restarted the provisioner, we noticed that our components were failing to complete their SSL handshakes to the provisioner, so they weren’t able to communicate with it any more.
The problem here is that, while our provisioner is now expecting communications to be encrypted, the other components are still running in unencrypted mode and are therefore unable to correctly respond to the provisioner’s TLS handshake.
In practice, what this means is that once you enable TLS on one OpenLI component, you need to enable TLS on all of them.
So to fix our errors, we’ll need to log in to the collector and mediator containers and update their configuration to enable SSL as well. This is essentially the same process as what we did on the provisioner in the last lesson (albeit with slightly different names for some of the key files).
Of course, don’t forget to restart the updated components to ensure that your new configuration is applied.
Just to make sure that everything is clear, this is how the TLS configuration options should now look in the OpenLI configuration file on your collector container.
The most important thing to note is that the TLS cert and TLS key files have “collector” in their name instead of “provisioner”, but otherwise the configuration is very similar to what you should have on your provisioner.
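As a rough sketch, the TLS section of the collector config might look like the following. The option names and file paths here are illustrative assumptions based on the pattern described above; use the exact option names and directory from your provisioner config in the previous lesson, substituting the collector's own certificate and key.

```yaml
# Illustrative sketch only -- mirror the TLS options from your
# provisioner config, swapping in the collector's cert and key.
tlscert: /etc/openli/openli-collector-crt.pem
tlskey: /etc/openli/openli-collector-key.pem
tlsca: /etc/openli/openli-ca-crt.pem
```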
And here is the configuration that needs to be applied on your mediator container. Again, the certificates and keys are found in the same directory as before, with just the obvious name change for the files themselves.
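The mediator's TLS section follows the same pattern; again, these option names and paths are illustrative, so match them to what is actually installed in your lab containers.

```yaml
# Illustrative sketch only -- same directory as the other components,
# with "mediator" in the cert and key file names.
tlscert: /etc/openli/openli-mediator-crt.pem
tlskey: /etc/openli/openli-mediator-key.pem
tlsca: /etc/openli/openli-ca-crt.pem
```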
Once you’ve applied those configuration changes and restarted the affected components, you should be able to return to your provisioner logs and see that the SSL handshakes are now being accepted.
Your components should now be talking to each other again -- try replaying one of the pcap files from a previous exercise to confirm, if you like.
You can also repeat the experiment from the previous lesson where we ran tracepktdump on the provisioner to sniff the intercept instructions.
This time around, you should see that there are no readable plain-text attributes in the payload of the sniffed packets, confirming that the internal OpenLI messages are now being encrypted.
The configuration changes we have made ensure that any communications between the provisioner and the other two OpenLI components are now encrypted.
However, there are still some communication channels that are not encrypted. Firstly, the ETSI records generated by the collector are still passed on to the mediator over a standard TCP socket. I’ll explain why very shortly.
Secondly, the mediator handovers are also not encrypted using the TLS public key infrastructure. This is because you will be expected to communicate with real-world agencies over a pre-configured VPN or IPSec tunnel instead.
The establishment of this tunnel will require collaboration between you and the agency, and is a bit beyond the scope of this lesson. Different agencies may have different requirements or preferences, but the agency should be able to communicate the necessary steps to you when the time comes.
OK, let’s jump back to the first situation I mentioned -- if both the collector and mediator have TLS enabled, why isn’t the output of the collector encrypted?
The thing to bear in mind here is that the volume of data transferred from the collector to the mediator can potentially be quite high -- consider situations where you need to intercept the full IP traffic of your busiest users, or you need to perform multiple concurrent interceptions. In these situations, how many gigabits of intercepted traffic are going to be passing from your collectors to your mediator?
Now factor in the overhead of encrypting all of those intercept records, which is going to reduce the maximum rate at which you can move intercepted traffic from the collector to the mediator. If your intercept system is already operating at close to capacity, the extra overhead of encryption may be too much and the system may end up dropping intercepted packets instead because it cannot keep up.
For this reason, we’ve decided that the ability to encrypt the records produced by the OpenLI collector should be an opt-in feature.
This means that, by default, encryption of intercept records is disabled but may be explicitly enabled by setting the “etsitls” option to “yes” within both the collector and mediator configuration files. Of course, you must already have supplied keys and certificates to both components as well for the encryption to succeed.
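Concretely, enabling the option is a one-line addition to each config file. The value shown here matches the behaviour described above; remember it must be identical on both components.

```yaml
# Must be set to the same value in BOTH the collector and mediator
# configuration files, and both must already have TLS certs and keys.
etsitls: yes
```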
Go ahead and enable this encryption option for both the collector and mediator right now.
Remember, both components must have the same value for the “etsitls” option, otherwise OpenLI will report an error.
When it comes time to do your real OpenLI deployment, you will have to decide whether you want the collector output to be encrypted or not.
If you are confident that your collector to mediator paths are safe from unauthorized sniffing, then you can set the “etsitls” option to “no” instead and not have to worry about the impact of encryption on your intercept performance.
Otherwise it may be better (and more acceptable to the LEAs) for you to enable encryption and accept the decreased maximum throughput. If you do choose to enable encryption of intercept records on your real deployment, I would suggest keeping a close eye on the collector logs for any messages indicating that the collector is failing to keep up.
As always, you’ll need to stop and restart both the collector and mediator to apply any changes to their configuration.
You can use tracepktdump once again to confirm whether the “etsitls” option is doing what it is supposed to do.
This time around, run tracepktdump on the collector container, set to capture all traffic on interface eth1.
Then, replay the tcpsip_voip.pcap trace file from the earlier exercises -- assuming you still have the intercept from that exercise configured on your provisioner.
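The two steps above might look something like this on the command line. The interface names are assumptions based on the training lab topology described earlier, and the exact libtrace URI for your replay interface may differ in your environment.

```shell
# On the collector container: sniff all traffic leaving on eth1
# (interface name assumed from the training lab setup).
tracepktdump int:eth1

# In another terminal, replay the capture file from the earlier
# exercise. The output interface depends on your lab topology.
tracereplay pcapfile:tcpsip_voip.pcap int:eth0
```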
If the intercept is configured correctly, and the encryption options have been set properly on all three components, you should see output on your tracepktdump as your collector sends intercept records on to the mediator.
Here’s a sniffed intercept record that was not encrypted.
As you can see, there’s a lot of plain text information in here that could easily be used to infer what intercepts are being undertaken and who the targets are -- definitely not something that should be potentially exposed.
And here is that same intercept record with encryption enabled. The content is properly obfuscated and indistinguishable from any other TLS traffic on your network.
One additional side effect of enabling TLS on your OpenLI system is that the REST API will now also use the provided certificates and switch over to HTTPS instead of HTTP.
This means that any communications with the REST API socket will now be encrypted, and therefore not easily readable by external parties -- which is good, because the JSON objects that we send in our requests contain some pretty sensitive information.
It also means that all our REST API requests must be made to URLs beginning with the “https://” prefix -- attempts to use regular “http” as we did in the previous exercises will now fail.
There is one very important side effect to bear in mind, though. If your provisioner is using a self-signed SSL certificate, then you may find any encrypted REST API requests that you make using curl will fail with an error message like the one shown on the slide.
This is because curl doesn’t trust your certificate, which makes sense since you didn’t get it signed by a reputable CA -- you signed it yourself (or in the case of the training lab, I signed it for you).
In the training lab, you can bypass this security check by adding a ‘-k’ option to your curl command.
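For example, a request from the earlier REST API exercises would now be issued like this. The hostname, port, and endpoint below are placeholders -- reuse the values from your previous exercises, changing only the scheme from http:// to https:// and adding the -k flag.

```shell
# Training lab ONLY: -k tells curl to skip certificate verification,
# which is needed because the lab certificate is self-signed.
# Replace the host, port and endpoint with your own lab values.
curl -k -X GET https://10.0.0.2:12001/ipintercept
```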
This is DEFINITELY not something that you would want to do with a real OpenLI deployment, so only do this to work around this issue while you’re experimenting with the training lab environment.
Of course, with your real deployment you’ll be using a properly signed certificate anyway (right?!) so you should never run into this issue with curl.
Before we finish, I just want to reiterate that we’ve had to take a few shortcuts with TLS in the training lab to make everyone’s life a bit easier -- we’ve used self-signed certificates, the certificates were already installed on the component containers, and we’ve been ignoring security warnings about the certificates when using the REST API.
These shortcuts allowed us to concentrate on the specifics of integrating TLS with OpenLI, rather than getting lost in the intricacies of trust and security.
Which is fine from an educational point of view, but I feel obligated to emphasize again that there are extra steps that you will need to take when branching out into a real OpenLI deployment.
Specifically, you should create your own certificates and have them signed by a trusted authority. These certificates can then be copied onto your OpenLI component hosts, but you must also make sure to set appropriate permissions on the certificate files to keep them secure.
Finally, pay close attention to any security warnings that are reported by an OpenLI component, or any tool that is interacting with OpenLI (such as curl). Ignoring these warnings in the training lab is OK, because it is a closed environment with no real world users -- doing the same in a real deployment could be disastrous.
That concludes the lesson on TLS. Next time we are going to be enhancing the security of our OpenLI deployment from a different angle by ensuring that only authenticated persons can send requests to the provisioner via the REST API.
We’re very close to the end now -- I’ll see you again soon for the next chapter.