Engage On Cradlepoint - rallytac/pub GitHub Wiki
Running RTS Software On Cradlepoint Routers
Those lovely folks over at Cradlepoint make some awesome gear that plenty of our customers use; so we figured it'd be a good idea to get our stuff working on that terrific hardware.
So here goes ...
If we take a look at a box like a Cradlepoint router - especially the R1900 - we find that it has the ability to host 3rd-party code like ours. That is done (as all the cool kids are doing these days) by using Docker. Basically, software that runs on the box is wrapped in a Docker container and runs along quite happily without interfering with the rest of the router's functionality. Happily, RTS's software is quite comfortable running in a Docker container; so we're all set.
Sounds good - no? Well, the fine-print here is that you kinda have to figure out a way to work within the boundaries and requirements for Cradlepoint. So that is what this article is all about.
NOTE: We're using the R1900 as an example here because that's what we have at our disposal to work with.
Docker Images For The Binaries
First off, all of our Dockerized software can be pulled down from Docker Hub in the Rallytac repository section. Given that the R1900 has an ARM64 processor running Linux, we'll want images that have `arm64` in their names. So, if you want a Rallypoint on your R1900, you'll want the `rallypointd-arm64` image. If you want Engage Bridge Service, you'll want the `engagebridged-arm64` image, and so on.
Each image contains the needed configuration files for the binary running inside the container.
For anyone who's dealt with Docker, you already know that Dockerized code has a particular issue when it comes to mutable data - "stuff" that can change - inside the container. Basically, if you modify items such as configuration files inside a running container you're all good until you delete/remove that container. When you do that, you lose those changes. Now we'll assume that you're not going to do that in general so you'd be OK. But, if you update your container (actually the image backing the container) - say for a bug fix, getting access to a new feature in an updated software version, and so on - your changes are lost.
There are a number of ways to work around this - a popular one being to place the mutable data (i.e. those configuration files) inside a directory on the host machine and then map that directory into the container(s) using Docker's `-v` command-line option. But that's not an option on the Cradlepoint, as you can't willy-nilly drop files onto the host - i.e. the actual Cradlepoint Linux OS.
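For reference, here's a sketch of what that `-v` approach looks like on a general-purpose Docker host (NOT the Cradlepoint); the host path is illustrative only:

```shell
# Sketch: persisting config on a normal Docker host via a bind mount.
# This does NOT work on the Cradlepoint - it's shown for contrast only.
host_dir=/opt/rts/config            # illustrative host directory
container_dir=/etc/rallypointd      # where the RP expects its config

# Compose the docker invocation (echoed rather than executed here):
cmd="docker run -d -v ${host_dir}:${container_dir} -p 7443:7443 rallytac/rallypointd-arm64"
echo "$cmd"
```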
So, we'll follow a different approach and use a Data-Only Container.
Docker Images For The Configuration
Data-Only Containers are Docker containers (that derive from an image) that don't run any meaningful software. Rather, they are meant to "contain", notionally, mutable files that are, in turn, utilized by other containers. How you configure this business varies from platform to platform but for this missive we're going to talk about how to use a data-only container on a R1900. Cool!?
OK, so let's say we want to set up a Rallypoint on a R1900. We're going to need the Rallypoint ARM64 image from above (`rallypointd-arm64`). That contains the Rallypoint executable binary. It also contains the configuration files the Rallypoint uses. But we're not going to use the ones in the RP's container. Rather, we're going to use the `rts-config-arm64` image for that purpose. It, just like our other Docker images, can be pulled from Docker Hub.
R1900 Container Setup
Hopefully you're already familiar with how the Cradlepoint software works regarding containers so we won't go into too much detail. (Besides, Cradlepoint has pretty good documentation already on the subject.) So we'll kinda short-circuit things a little and get right into it.
Here's a quick checklist of what you'll need:
- A Cradlepoint R1900.
- A license from Cradlepoint that gives you the ability to create containers.
- Administrative login access to the R1900.
- SSH access to the Cradlepoint.
- Connectivity to the Internet.
- Access to your Cradlepoint NetCloud account if your router is managed by NetCloud. (This is optional.)
Once you've got this in place, do the following:
- Login to the R1900.
- Navigate to `System` -> `Containers` -> `Projects`.
- Click `Add` to create a new project.
- In the popup window:
  - Give your project a name - let's go with `rts`.
  - Make sure that `Enabled` is checked.
  - Set the `Update Interval` to `0`.
  - Click the `Compose` tab.
  - Paste the following YAML into the text area:
```yaml
version: '2.4'
services:
  rts-data:
    image: rallytac/rts-config-arm64
    network_mode: bridge
    volumes:
      - 'rts-volume:/etc'
      - 'rts-volume:/tmp'
    ports:
      - '9082:9082'
    environment:
      - RTS_SID=ABCDEFG1234567890
      - RTS_UN=admin
      - RTS_UP=password
      - RTS_BP=backup_password
      - RTS_PORT=9082
    logging:
      driver: json-file
  rts-rp:
    image: rallytac/rallypointd-arm64
    network_mode: bridge
    volumes:
      - 'rts-volume:/etc'
      - 'rts-volume:/tmp'
    ports:
      - '7443:7443'
    logging:
      driver: json-file
volumes:
  rts-volume:
    driver: local
```
- Click `Save`.
Let's take a look at what's going on here...

At the top level, we have two services - `rts-data` and `rts-rp` - and a "root" volume (data storage area) named `rts-volume`.
- `rts-data` is the Data-Only Container.
  - The full name for the Docker image for it is `rallytac/rts-config-arm64`, which Docker uses to pull the container from Docker Hub. (If you're using a different repository, change this accordingly.)
  - Next, `network_mode` is `bridge`. Change this to use whatever specialized container network you have set up.
  - Then, the `volumes` element has two entries that both "root" off `rts-volume`, discussed below. Here we have `/etc` and `/tmp`. The `/etc` line maps the `/etc` root inside the container - giving us access to the RTS configuration files pre-packaged in the container. The `/tmp` line gives us access to the `/tmp` directory inside the container.
  - In the `ports` section we're mapping TCP port `9082` to the outside world. That port is for the little web server we've built into the container. More on this later on.
  - Also to do with the web server, the `environment` section passes arguments to the web server. We'll cover that stuff further down below too.
  - Finally, `logging` specifies that we use `json-file` as a driver. This gives us the ability to peek at the logs produced by anything inside the container (see below for the discussion on Logging).
- `rts-rp` is the container for the Rallypoint. Its full name is `rallytac/rallypointd-arm64`. Just like the `rts-data` container, it has the same `volumes` mapping and `logging`. It also has a port mapping under `ports` to expose the RP's TCP listening port of `7443` to the outside so that Engage users can connect to it.
- `rts-volume` under `volumes` just tells the R1900 to create a local folder for our services named `rts-volume`, which is then shared between whatever services reference it.
NOTE: The port mapping discussed above for the Configuration Container's web server and the Rallypoint is NOT recommended for production. Rather, Cradlepoint's advice is to set up a specialized container/Docker network, expose the port that way, and then map it to the outside world through the R1900's network configuration mechanism. We won't get into that here as it'll just complicate things.
HINT: You'll notice we've named the services `rts-data` and `rts-rp`. Ideally we'd have named them just `data` or `rp` but the Cradlepoint doesn't seem to like such names - throwing its toys out with an error complaining about a malformed top-level YAML element (whatever that means). So, just go with it and use longer names - such as `rts-some-meaningful-service-name`.
Congratulations, you've cruised through the easy part! Now let's actually use this stuff.
Using The Containers
The Sadist Method
So ... this is where things get a little more exciting. As you know, our software (like most others) needs to be configured before it can work. Fortunately, the Rallypoint doesn't actually need any special configuration to work. The default configuration is good enough for it to stand up on its hind legs and do its thing. Regardless, let's practice a little with configuring it.
The first thing we're going to do is give our Rallypoint an ID. We'll do this inside the Data-Only Container named `rts-data`. And, yay!, we're going to use a combination of `ssh` and everyone's favorite text editor, `vi`!!! (I bet your fingers are itching to muck with `vi` ... right!?)
- Open an SSH connection to your R1900 - something like `ssh [email protected]` or whatever your login credentials and IP address are for your R1900.
- Next, let's see a list of containers by entering `container list`.
```
[admin@R1900-8d6: /]$ container list
Project: rts
  Containers: rts_rts-data_1 rts_rts-rp_1
```
Notice how our project (`rts`) has two containers - `rts_rts-data_1` and `rts_rts-rp_1`. Those are the instantiations of the services from the YAML we plugged into the configuration above. See how the container name is constructed as `project_service_instance`. The data service (`rts-data`) is containerized as `rts_rts-data_1` while the RP is containerized as `rts_rts-rp_1`. When we want to get into those containers, we need to use those specific names. But ... when we want to start and stop the containers, we actually start and stop the project - `rts`. (Yeah, it feels odd to Docker aficionados but it does work nicely to keep things simpler frankly.)
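The naming scheme can be sketched in shell form:

```shell
# Cradlepoint constructs container names as project_service_instance.
project="rts"
service="rts-data"
instance="1"

name="${project}_${service}_${instance}"
echo "$name"    # rts_rts-data_1
```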
Alright, given that our Data-Only Container is the one we want to work with, let's get into it. Do this as follows:
```
[admin@R1900-8d6: /]$ container exec rts_rts-data_1
root@5fa54b4c5718:/#
```
Notice the `root@5fa54b4c5718:/#` prompt being different from the `[admin@R1900-8d6: /]` prompt above it. The admin prompt is the prompt at the ssh console level inside the R1900, while the root prompt tells us we're inside the container.
Running `ls` at this prompt (remember we're inside a Linux container at this point), you'd see something like this:
```
root@5fa54b4c5718:/# ls
bin   dev  home  media  opt   root  sbin  sys  usr
boot  etc  lib   mnt    proc  run   srv   tmp  var
```
Next, let's go into the RP's configuration directory:
```
root@5fa54b4c5718:/# cd etc/rallypointd/
root@5fa54b4c5718:/etc/rallypointd# ls
peers.json  rallypointd.certstore  rallypointd_conf.json
```
... and, let's edit our RP's core configuration file:
```
root@5fa54b4c5718:/etc/rallypointd# vi rallypointd_conf.json

{
    "id":"",
    "manufacturerId":"",
    "listenPort":7443,
    "interfaceName":"",
    "multicastInterfaceName":"",
    "requireFips":false,
    "allowMulticastForwarding":false,
    "ioPools":-1,
    .
    .
    .
```
Use your `vi` expertise to edit the file, setting an ID of `MY-R1900-RP`.
```
{
    "id":"MY-R1900-RP",
    "manufacturerId":"",
    .
    .
    .
```
Save, exit, and you're done. Next, exit the container to return to the R1900's prompt:
```
root@5fa54b4c5718:/etc/rallypointd# exit
exit
rts_rts-data_1 exec done.
[admin@R1900-8d6: /]$
```
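As an aside, if `vi` isn't your thing, the same edit can be scripted. Here's a sketch using `jq` (which, as mentioned at the end of this article, is included in the Data-Only Container); we work on a scratch copy here, but inside the container the real file is `/etc/rallypointd/rallypointd_conf.json`:

```shell
# Sketch: set the Rallypoint id without an interactive editor.
# We use a scratch file; in the container you'd target
# /etc/rallypointd/rallypointd_conf.json instead.
conf=/tmp/rallypointd_conf.json
printf '%s\n' '{"id":"","manufacturerId":"","listenPort":7443}' > "$conf"

# jq rewrites the id field and emits well-formed JSON.
jq '.id = "MY-R1900-RP"' "$conf" > "${conf}.new" && mv "${conf}.new" "$conf"

jq -r '.id' "$conf"    # MY-R1900-RP
```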
Next, let's restart everything to make sure our configuration has taken. First, stop it:
```
[admin@R1900-8d6: /]$ container stop rts
```
Give it a few seconds to count its fingers and toes, and then start it:
```
[admin@R1900-8d6: /]$ container start rts
```
The Nicer Method
If you're feeling apprehensive about messing with those configurations using `vi`, you're not alone. That kind of stuff is not for the faint-of-heart. And, besides, you may be wondering about things like installing new certificate stores, backing up and restoring configurations, monitoring the services, and so on. [Be honest, you weren't ACTUALLY wondering about that - were you!?]

Our solution to this is to incorporate a little web server inside the Data-Only Container that gives you a somewhat nicer UI to edit your JSON configurations, upload new certificate stores, backup and restore your configurations, monitor service status, and so on. It's not much to look at right now - we're working on making it prettier - but it gets the job done.
To access the web server, fire up your browser and point it to `http://<ip_address_of_your_r1900>:9082`. You should be presented with a login screen. The credentials of `admin` and `password` come from those environment variables in the YAML above. Specifically, `RTS_UN` is the user name (`admin` in the example), while `RTS_UP` is the user password (`password` in the example).
Once logged in you'll be presented with a big button to click on for each of the services. Try clicking on `Rallypoint` and then on `Core` to edit the core configuration. We think you'll like it.
Feel free to poke around, make changes, and so on. But remember to restart your project once you've made your changes. This can be done in the router's command line with `container stop rts` followed after a few seconds by `container start rts`.
We'll update this article with screenshots and more in-depth discussion of the configuration web server but, for now, please give this thing a try and let us know what you like and don't like.
Those Environment Variables...
Here's a quick rundown of those variables:
- `RTS_SID` is the system ID you're assigning for your configuration container.
- `RTS_UN` is the administrator's user name.
- `RTS_UP` is the administrator's user password.
- `RTS_BP` is the baseline password used to encrypt backups.
- `RTS_PORT` is the port the web server listens on internally. Map this to the outside with the `ports` setting.
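As a concrete (hypothetical) example, here's how the `rts-data` service's `environment` block might look with your own values plugged in - every value below is a placeholder, and if you change `RTS_PORT` the `ports` mapping has to change to match:

```yaml
    environment:
      - RTS_SID=MY-SITE-0001                 # your own system ID (placeholder)
      - RTS_UN=opsadmin                      # administrator user name (placeholder)
      - RTS_UP=a-strong-password             # administrator password (placeholder)
      - RTS_BP=a-different-backup-password   # backup encryption password (placeholder)
      - RTS_PORT=9082                        # keep in sync with the ports mapping
    ports:
      - '9082:9082'
```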
You're all set!!
Logging
So ... how does one see what that magic RTS software is doing inside the container? Well, the `logging` instructions in the YAML above tell Docker on the R1900 to dump the logs into JSON files. Those JSON files can then be viewed as follows:
```
[admin@R1900-8d6: /]$ container logs rts_rts-rp_1
---------------------------------------------------------------------------------
Rallypoint version 1.239.9079 [RELEASE] for linux_arm64
Copyright (c) 2019 Rally Tactical Systems, Inc.
Build time: Sep  1 2023 @ 19:04:06
---------------------------------------------------------------------------------
2023-10-13 01:59:52.377 [1/0x7faaea7020-    ] I/main: loading configuration from '/etc/rallypointd/rallypointd_conf.json'
2023-10-13 01:59:52.381 [1/0x7faaea7020-    ] I/CertStore: Loading '/etc/rallypointd/rallypointd.certstore'
2023-10-13 01:59:52.432 [1/0x7faaea7020-    ] I/CertStore: loaded 'rtsCA'
2023-10-13 01:59:52.432 [1/0x7faaea7020-    ] I/CertStore: loaded 'rtsFactoryDefaultRpSrv'
2023-10-13 01:59:52.432 [1/0x7faaea7020-    ] I/CertStore: loaded 'rtsFactoryDefaultRpSrvWs'
2023-10-13 01:59:52.432 [1/0x7faaea7020-    ] I/main: unicast nic: ALL
2023-10-13 01:59:52.432 [1/0x7faaea7020-    ] I/main: multicast nic: N/A
2023-10-13 01:59:52.432 [1/0x7faaea7020-    ] W/main: loop detection has been disabled - this is not advised except for troubleshooting purposes
2023-10-13 01:59:52.432 [1/0x7faaea7020-    ] I/Rallypoint: rallypoint 'MY-R1900-RP' listening on TCP port 7443 with 4 i/o pools
2023-10-13 01:59:52.432 [1/0x7faaea7020-    ] W/Rallypoint: Engage client UDP streaming disabled - media streams will not be optimized
2023-10-13 01:59:52.433 [1/0x7faaea7020-    ] I/Rallypoint: using server certificate:
        subject..........: [/C=US/ST=Washington/L=Seattle/O=Rally Tactical Systems, Inc./OU=(c) 2019 Rally Tactical Systems, Inc. - For
.
.
.
```
NOTE: It appears that once you run the `container logs` instruction there is no way to exit the process - such as by pressing `Ctrl-C` - while the container is still running. The only two methods we've found so far are to either stop the container by stopping the whole project (`container stop rts` for example) or to terminate the SSH connection. This may be a shortcoming in the R1900 or simply a lack of knowledge on our part here at RTS.
Adding Services
Now that you have a good idea of how this stuff works, it should be pretty straightforward to add other services as you see fit. For example, let's say you want a Rallypoint and an Engage Bridge Service in your project. Simply update your project configuration with the new goodies and you're on your way. In fact, here's a quick rundown of making a project with both an RP and EBS.
First, the Docker Compose YAML for the project (we'll keep it as `rts`):
```yaml
version: '2.4'
services:
  rts-data:
    image: rallytac/rts-config-arm64
    network_mode: bridge
    volumes:
      - 'rts-volume:/etc'
      - 'rts-volume:/tmp'
    ports:
      - '9082:9082'
    environment:
      - RTS_SID=ABCDEFG1234567890
      - RTS_UN=admin
      - RTS_UP=password
      - RTS_BP=backup_password
      - RTS_PORT=9082
    logging:
      driver: json-file
  rts-rp:
    image: rallytac/rallypointd-arm64
    network_mode: bridge
    volumes:
      - 'rts-volume:/etc'
      - 'rts-volume:/tmp'
    ports:
      - '7443:7443'
    logging:
      driver: json-file
  rts-ebs:
    network_mode: bridge
    image: rallytac/engagebridged-arm64
    volumes:
      - 'rts-volume:/etc'
      - 'rts-volume:/tmp'
    logging:
      driver: json-file
volumes:
  rts-volume:
    driver: local
```
Here's the low-down:
- YAML changes: the addition of the `rts-ebs` service.
- Containers: a new instance named `rts_rts-ebs_1` will show up in the list of containers.
- Configuration: EBS' configuration resides in `/etc/engagebridged` inside the `rts_rts-data_1` container.
For example:
```
[admin@R1900-8d6: /]$ container list
Project: rts
  Containers: rts_rts-data_1 rts_rts-rp_1 rts_rts-ebs_1

[admin@R1900-8d6: /]$ container exec rts_rts-data_1
root@5fa54b4c5718:/# cd etc/engagebridged
root@5fa54b4c5718:/etc/engagebridged# ls
bridges.json  engagebridged.certstore  engagebridged_conf.json
```
More To Come ...
We're still kinda finding our way with all this stuff so expect things to become a little more polished and easier to use. We're working on it ...
Oh yes, given that editing JSON files can be troublesome and error-prone, we've gone the extra mile and included the popular `jq` command-line tool in the Data-Only Container so you can verify your JSON edits.
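For example, a quick sanity check after an edit might look like this (the file here is a scratch copy for illustration; `jq -e .` exits non-zero on malformed JSON):

```shell
# Sketch: validate an edited JSON config with jq before restarting the project.
conf=/tmp/rallypointd_conf_check.json
printf '%s\n' '{"id":"MY-R1900-RP","listenPort":7443}' > "$conf"

if jq -e . "$conf" > /dev/null 2>&1; then
  result="JSON OK"
else
  result="JSON is malformed - fix it before restarting"
fi
echo "$result"
```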