Overview of the Spacecraft Proximity Operations Testbed - Carleton-SRCL/SPOT GitHub Wiki

Experimental Test Bed

The facility at Carleton University has the following main components: a large gravity offset table and three spacecraft simulator platforms (chaser, target, and obstacle). The platforms utilize compressed air and air bearings to create a near-frictionless environment. These platforms are self-contained and completely autonomous during any experiments. An off-board computer is used to upload the guidance, navigation, and control (GNC) software, initiate experiments, transmit data from the motion capture cameras to the platforms, and download the experimental data at the end of each test.

High-Level Overview

The platforms each have an NVIDIA Jetson Xavier NX Development board from Seeed Studio. The board features a six-core NVIDIA Carmel ARM v8.2 64-bit CPU, a 384-core NVIDIA Volta GPU with 48 Tensor Cores, and 8GB of 128-bit LPDDR4x RAM, enabling it to deliver up to 21 tera operations per second (TOPS) of compute performance. It also supports a range of standard interfaces for connecting peripherals, including Ethernet, USB, HDMI, and more.

The GNC software is first designed in MATLAB-Simulink, where simulations can be performed before running an experiment. After simulation, the Simulink diagram is compiled into an executable that runs on-board the Xavier boards. Once the main diagram has been designed, an experiment can be initiated via a GUI. Any instrumentation on the platform is integrated into the GNC software through custom Simulink driver blocks.

The platforms are capable of performing a multitude of different experiments such as formation flying manoeuvres, stabilization of non-cooperative targets, robotic manipulation tasks, and vision-based tracking. Several experiments have already been performed using the experimental testbed, including an experimental validation for tethered capture of a spinning space debris, iterative learning control of spacecraft proximity operations based on confidence level, and the optimal deployment of a robotic manipulator.

Floating Surface

A 3.5 m $\times$ 2.4 m granite table serves as the base upon which the chaser and target spacecraft float. The use of air bearings reduces friction to a negligible level. Because the surface slopes by 0.0026 and 0.0031 degrees along the x and y directions, residual gravitational accelerations of 0.439 and 0.252 mm/s² perturb the dynamics of the floating platforms in the x and y directions respectively. These values were obtained by observing the free motion of the platforms and will change over time; periodic recalibration of the system is key to maintaining high levels of performance.
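The quoted residual accelerations come from observing the free motion of the platforms. The idea behind that calibration can be sketched as follows: a platform released at rest drifts as x(t) = x0 + v0·t + ½·a·t², so fitting a quadratic to logged positions recovers the acceleration. The function name and the drift data below are illustrative (the data is synthetic, generated from the x-direction value quoted above), not actual calibration logs.

```python
import numpy as np

def estimate_acceleration(t, x):
    """Fit x(t) = x0 + v0*t + 0.5*a*t**2 and return the acceleration a (m/s^2)."""
    coeffs = np.polyfit(t, x, 2)   # coefficients: [0.5*a, v0, x0]
    return 2.0 * coeffs[0]

# Synthetic free-drift data using the x-direction value quoted above
t = np.linspace(0.0, 30.0, 300)                  # 30 s of position samples
x = 0.1 + 0.002 * t + 0.5 * 0.439e-3 * t**2      # metres

a_est = estimate_acceleration(t, x)              # ~0.439e-3 m/s^2
```

In practice the logged motion-capture positions are noisy, so the fit would be run over a long drift and repeated in both table directions.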

Spacecraft Simulator Platforms

All three platforms measure 0.3 m $\times$ 0.3 m $\times$ 0.3 m in height, width, and depth. Each platform consists of three easily-removable levels, as shown below. The battery and supporting electronics, the inertial measurement unit (IMU), and the on-board computer are housed on the center level. All propulsion-related hardware, including the high-pressure compressed air tank, is located on the bottom and sub-levels. The chaser platform carries the vision system on its top level. The target and chaser platforms additionally support docking hardware, including a robotic manipulator and a docking interface. A set of solar panels can be placed on any of the platforms, and a sun simulator may be used to generate high-contrast lighting conditions from a single source.

The propulsion system is responsible for the translation and attitude control of the platforms. Air is delivered to the eight thrusters located below the lower deck and is passed through nozzles to produce thrust. Air pressure at the nozzles is 80 psig, which in principle allows each thruster to produce a peak thrust of 1 N. In practice, pressure losses through the piping greatly diminish the output, resulting in a thrust closer to 0.25 N (or less).

The opening and closing of the thrusters is controlled by normally-closed solenoid valves, which open in less than 10 ms when 12 VDC is applied. Each solenoid is driven by a power MOSFET controlled by the on-board computer. Through intelligent on-off cycling of the valves, any force or torque within the feasible limits of the actuators can be achieved. A control mixer converts a commanded force or torque into the corresponding on-off command sequence.
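A control mixer of this kind can be sketched as a least-squares allocation problem. The thruster positions, the pairing into opposing nozzles, and the 0.25 N per-thruster force below are assumptions for illustration, not the platform's exact geometry:

```python
import numpy as np

F_MAX = 0.25  # N, approximate thrust per nozzle after line losses (assumed)

# Rows: (position x, position y, direction x, direction y) in the body frame.
# This symmetric layout is hypothetical.
THRUSTERS = np.array([
    [-0.15,  0.10,  1.0,  0.0],
    [-0.15, -0.10,  1.0,  0.0],
    [ 0.15,  0.10, -1.0,  0.0],
    [ 0.15, -0.10, -1.0,  0.0],
    [ 0.10, -0.15,  0.0,  1.0],
    [-0.10, -0.15,  0.0,  1.0],
    [ 0.10,  0.15,  0.0, -1.0],
    [-0.10,  0.15,  0.0, -1.0],
])
PAIRS = [(0, 2), (1, 3), (4, 6), (5, 7)]  # each pair: opposite force and torque

def wrench_matrix(thrusters):
    """Columns map each thruster's force magnitude to the net (Fx, Fy, tau_z)."""
    px, py, nx, ny = thrusters.T
    return np.vstack([nx, ny, px * ny - py * nx])

def mix(fx, fy, tz):
    """Convert a commanded planar wrench into per-thruster duty cycles in [0, 1]."""
    B = wrench_matrix(THRUSTERS)
    # One signed command per opposing pair; its sign selects which nozzle fires.
    Bp = np.column_stack([B[:, a] for a, _ in PAIRS])
    c, *_ = np.linalg.lstsq(Bp, np.array([fx, fy, tz]), rcond=None)
    f = np.zeros(THRUSTERS.shape[0])
    for (a, b), ci in zip(PAIRS, c):
        f[a], f[b] = max(ci, 0.0), max(-ci, 0.0)
    return np.clip(f / F_MAX, 0.0, 1.0)   # fraction of each PWM period "on"

duty = mix(0.2, 0.0, 0.01)  # e.g. 0.2 N along x plus a small torque
```

The returned duty cycles would then be realized by timing the solenoid on-off pulses within each modulation period; the actual on-board mixer may use a different allocation scheme.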

PhaseSpace X2E Motion Tracking System

To determine the ground-truth position of the platforms during an experiment, an array of ten PhaseSpace motion capture cameras tracks the inertial position of light-emitting diodes (LEDs), each pulsing at its own unique frequency. The reported resolution of this camera system is 0.01 mm, and position measurements are made in the x, y, and z directions. Each platform has four LEDs, one at each corner of its top panel. From these, the PhaseSpace software can derive the quaternion of the rigid body to provide an attitude measurement, as well as track the center of gravity of each spacecraft in inertial space.
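The idea of recovering a rigid-body pose from a set of corner markers can be illustrated with a planar least-squares (Kabsch) fit. The 0.3 m marker square, the function names, and the synthetic measurement below are assumptions for this sketch; the PhaseSpace software performs its own 3D solution internally.

```python
import numpy as np

BODY_LEDS = np.array([      # assumed LED positions in the body frame (m)
    [ 0.15,  0.15],
    [-0.15,  0.15],
    [-0.15, -0.15],
    [ 0.15, -0.15],
])

def fit_pose(measured):
    """Least-squares 2D rigid fit: returns (rotation angle in rad, centroid)."""
    c_body = BODY_LEDS.mean(axis=0)
    c_meas = measured.mean(axis=0)
    H = (BODY_LEDS - c_body).T @ (measured - c_meas)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return np.arctan2(R[1, 0], R[0, 0]), c_meas

# Synthetic measurement: platform rotated 30 degrees, centred at (1.0, 0.5)
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
meas = BODY_LEDS @ R_true.T + np.array([1.0, 0.5])

angle, centre = fit_pose(meas)
```

With noisy marker data the same fit averages the error over all four LEDs, which is why a marker at every corner improves the attitude estimate.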

Robotic Manipulator & Gripper

The robotic manipulator is a modified version of the pro-series robotic arms available from CrustCrawler. Some modifications to the arm were required to ensure compatibility with the spacecraft and to improve the overall rigidity of the linkages. The completed manipulator consists of two linkages followed by an electromagnetic gripper attached to the distal link. Once the gripper is inserted into the dock, the electromagnet on the target spacecraft is switched on, ensuring a rigid capture. Each joint is actuated by a Dynamixel MX-64 "smart" actuator: in addition to the motor itself, each joint actuator has a built-in PID controller and corresponding sensors.
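As an illustration of what such a built-in position controller does, here is a minimal discrete PID loop driving a toy joint model. All gains, the 100 Hz update rate, and the single-integrator plant are assumed values for this sketch, not the MX-64's actual firmware parameters.

```python
class PID:
    """Textbook discrete PID controller acting on a position error."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy joint: commanded effort drives the angular rate directly
pid = PID(kp=4.0, ki=0.5, kd=0.2, dt=0.01)
angle = 0.0
for _ in range(5000):                       # 50 s of 100 Hz updates
    angle += pid.update(1.0, angle) * 0.01  # track a 1.0 rad setpoint
```

In practice the user only writes goal positions to the actuator over its serial bus; the loop above runs inside the servo.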

Vision Sensor Suite

The vision sensor suite is located on the top level of the chaser platform. A ZED 2 stereo camera, connected via USB to the onboard vision computer (an NVIDIA Jetson TX2 or Orin development board), is used alongside vision-based algorithms. The camera is equipped with a wide-angle, 8-element, all-glass dual lens with optically corrected distortion. The field of view is 110° (H) × 70° (V) × 120° (D). Videos can be recorded at a maximum resolution of 2208 × 1242 pixels per lens at 15 frames per second (FPS).

An ICI 9320P thermal camera provides infrared imaging capabilities to detect temperature variations and thermal signatures. This thermal camera enhances the platform’s ability to operate in low-visibility conditions and aids in target identification and tracking.

In addition to the two aforementioned camera sensors, the vision sensor suite also includes a Garmin LIDAR-Lite v3 Laser Rangefinder. This sensor is a high-performance, compact LIDAR (Light Detection and Ranging) that provides precise distance measurements. It operates with a range of up to 40 m and a resolution of 1 cm, and can thus provide reliable distance data for real-time path planning and collision avoidance.
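A simple way such range data can feed collision avoidance is a go/no-go braking check: compare the measured range against the stopping distance v²/(2a). The platform mass, the available deceleration (roughly 0.5 N of braking thrust on a platform of order 10 kg gives a few hundredths of a m/s²), and the safety margin below are all assumed values for illustration.

```python
def braking_distance(closing_speed, decel):
    """Distance needed to null the closing speed: v**2 / (2*a), in metres."""
    return closing_speed**2 / (2.0 * decel)

def must_brake(range_m, closing_speed, decel=0.04, margin=0.2):
    """True once the measured range no longer covers stopping distance plus margin."""
    return range_m < braking_distance(closing_speed, decel) + margin
```

For example, at a 0.1 m/s closing speed the assumed 0.04 m/s² deceleration needs 0.125 m to stop, so a 0.3 m reading would trigger braking while a 1 m reading would not.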

Finally, an Intel RealSense LiDAR Camera L515 will be installed on the chaser platform beginning in the Fall 2024 semester. The LiDAR provides depth measurements over an operating range of 0.25 m to 9 m, with reported depth errors ranging from less than 5 cm to less than 14 cm depending on distance. It is also equipped with an RGB camera and an IMU.
