Arm
Arm Control
Inverse Kinematics
The purpose of inverse kinematics is to calculate the joint angles required to achieve a desired end effector position. Many robots use a velocity-based or heuristic approach to this problem, but our arm design allows us to do something a little simpler. Our arm, as per competition requirements, has five degrees of freedom. Three joints control position in the YZ plane, one controls rotation around the Y axis, and the final joint controls the rotation of the end effector. As such, we are able to use a trigonometric approach to calculating joint angles.
For reference, we call the first joint, which controls Y-axis rotation, the waist. The joints controlling motion in the YZ plane are called the shoulder, elbow, and wrist, in order from closest to the base to closest to the end effector. Finally, the hand is the joint controlling the rotation of the end effector, as shown in Figure 1 below.
Figure 1: A drawing of the arm with each joint labelled with its name and direction of rotation.

We take two pieces of input: a desired end effector pose, given as an array {x, y, z, alpha_x, alpha_y, alpha_z}, where x, y, z are the Cartesian coordinates and alpha_x, alpha_y, alpha_z are the XYZ Euler angles representing the hand rotation; and the current pose of the robot, given as an array {waist, shoulder, elbow, wrist, hand}, where each index contains its joint's angle. We define the origin as the waist position.
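For illustration, the inputs might look like the following (the specific values, and the assumption of metres and radians, are ours, not taken from the actual code):

target_pose = [0.35, 0.20, 0.10, 0.0, 1.57, 0.0]  # {x, y, z, alpha_x, alpha_y, alpha_z}
current_angles = [0.0, 0.52, -1.04, 0.52, 0.0]    # {waist, shoulder, elbow, wrist, hand}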
Were our robot capable of rotating 360 degrees at every joint, there would be four ways to reach any one position, as there is always a working reflex and non-reflex angle for both the shoulder and elbow. The shoulder's position is fixed, so the first step is to generate the possible elbow positions. The method for this is fairly simple: imagine two circles centered on the shoulder and the wrist, with radii equal to the shoulder-elbow link and the elbow-wrist link respectively. These two circles typically intersect at two points (occasionally they intersect at only one, but that is rare), and these two points are the potential elbow positions, as shown below in Figure 2.
Figure 2: A drawing of the shoulder, elbows, and wrist displaying the circles and intersection points.

We do this in the inverseKinematicsJointAngles function. Essentially, we calculate the position of the wrist by backtracking from the known hand position along the line given by the known hand angle. (Note: Euler angles do not translate directly into a direction in Cartesian space, so we convert to a transformation matrix to recover this line.) We do a similar process for the shoulder, which is always directly above the waist at a fixed height. Then, we calculate the aforementioned circle intersections and combine these to create two arrays of joint positions, one for each elbow position.
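As an illustration of the circle-intersection step, here is a minimal 2D sketch in the arm's plane (the function name, point representation, and parameters are ours for illustration, not the actual implementation):

import math

def elbow_candidates(shoulder, wrist, l1, l2):
    """Return the 0, 1, or 2 possible elbow positions in the arm's plane.

    shoulder, wrist: (x, y) points; l1 = shoulder-elbow link, l2 = elbow-wrist link.
    """
    dx, dy = wrist[0] - shoulder[0], wrist[1] - shoulder[1]
    d = math.hypot(dx, dy)
    # No solution if the wrist is out of reach, or too close to fold the links into.
    if d == 0 or d > l1 + l2 or d < abs(l1 - l2):
        return []
    # Distance along the shoulder-wrist line to the chord joining the intersections.
    a = (l1**2 - l2**2 + d**2) / (2 * d)
    h = math.sqrt(max(l1**2 - a**2, 0.0))
    # Midpoint of the chord between the two candidate elbows.
    mx, my = shoulder[0] + a * dx / d, shoulder[1] + a * dy / d
    if h == 0:
        return [(mx, my)]  # Circles are tangent: a single elbow position.
    return [(mx + h * dy / d, my - h * dx / d),
            (mx - h * dy / d, my + h * dx / d)]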
These two arrays are then passed into the manager function inverseKinematicsAngleOptions, which only serves to organize four calls to inverseKinematicsComputeJointAngles (one per elbow position, with and without the waist rotated 180 degrees). This function takes a hand, wrist, and elbow position, as well as a parameter indicating whether the waist angle should be rotated 180 degrees. It contains the bulk of the algorithm, and takes advantage of the limited number of joints controlling each plane of motion.
Consider the arm as viewed from the side in Figure 1. On this flat plane, only three joints control the end effector position: the shoulder, elbow, and wrist. We project our joint positions onto the plane defined by the line between the XZ coordinates of the waist and end effector. Knowing the elbow position, we can calculate the angle of the shoulder joint using inverse sine. Next, draw an imaginary line between the shoulder and wrist to form a triangle. Since we know the lengths of the shoulder-elbow and elbow-wrist links and can calculate the length of this imaginary line from the coordinates of the shoulder and wrist, we can use the cosine law to calculate the interior angle of the elbow. We repeat this process with the elbow, wrist, and hand to calculate the interior angle of the wrist.
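A sketch of that triangle computation (the name and signature are hypothetical; the real logic lives in inverseKinematicsComputeJointAngles):

import math

def elbow_interior_angle(shoulder, wrist, l1, l2):
    """Interior elbow angle via the cosine law, in the arm's plane."""
    # Length of the imaginary shoulder-wrist line.
    c = math.hypot(wrist[0] - shoulder[0], wrist[1] - shoulder[1])
    # Cosine law: c^2 = l1^2 + l2^2 - 2*l1*l2*cos(theta).
    cos_theta = (l1**2 + l2**2 - c**2) / (2 * l1 * l2)
    return math.acos(max(-1.0, min(1.0, cos_theta)))  # Clamp for float safety.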
Now consider the arm as viewed from the top, as in Figure 3 below. We can calculate the appropriate waist angle using the inverse tangent of the end effector's horizontal (X and Z) coordinates, since this joint is the only one capable of affecting this ratio.
Figure 3: The arm as viewed from the top, showing the waist rotation pointing to the hand.

Then, we use the measured "zero" of each joint to adjust the calculated angles. This is where the predetermined waist angle rotation comes into play: if the given parameter is 180, we rotate the waist angle 180 degrees and negate the calculated shoulder angle, so that we continue to point along the same line. This finalized set of angles is then passed back to inverseKinematicsAngleOptions.
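A sketch of the waist calculation and the 180-degree flip, assuming Y is up so the horizontal coordinates are X and Z (the axis order, signs, and names here are assumptions, not the actual code):

import math

def waist_and_shoulder(end_effector, shoulder_angle, flip_180=False):
    """Waist rotation toward the end effector, with the optional 180-degree flip."""
    x, z = end_effector[0], end_effector[2]
    waist = math.atan2(x, z)  # Inverse tangent of the horizontal (X, Z) coordinates.
    if flip_180:
        # Rotate the waist 180 degrees and negate the shoulder so the arm
        # still points along the same line.
        waist += math.pi
        shoulder_angle = -shoulder_angle
    return waist, shoulder_angle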
Once all the angle sets are collected, they are routed into legalIKPositionPicker. Although we previously assumed all joints could rotate 360 degrees, this is not the case. This function begins with a loop that checks whether each angle in an angle set is physically achievable and discards the set if it is not. Typically this leaves only one position, assuming an achievable position exists at all. If multiple remain, we choose the position closest to the current position, and this is the final output of the function.
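In sketch form (the joint-limit representation, names, and the closeness metric are placeholders):

def legal_ik_position_picker(angle_sets, current_angles, joint_limits):
    """Keep only angle sets within the joints' physical limits, then pick
    the one closest to the current pose."""
    legal = [s for s in angle_sets
             if all(lo <= a <= hi for a, (lo, hi) in zip(s, joint_limits))]
    if not legal:
        return None  # No achievable position.
    # Closest to the current pose by total joint travel.
    return min(legal, key=lambda s: sum(abs(a - c) for a, c in zip(s, current_angles)))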
TL;DR: we calculate the four possible arrangements of joints using the cosine law and inverse tangent, then pick whichever one obeys the physical limits of the joints.
Arm up/down tilt
The upDownTilt function uses inverse kinematics to compute new angles in radians for the joints [waist, shoulder, elbow, wrist, hand] after tilting the claw up or down, while maintaining the same position in space at the base of the claw. It takes as input joystick_input, normalized between -1 and 1, where negative inputs move down and positive inputs move up, and the current angles cur_angles as a list in radians.
This function first computes the transformation matrix associated with the current angles using the arm_kinematics.forwardKinematics function. It then converts it into a list of the current position in the form [x, y, z, X, Y, Z] using arm_kinematics.Mat2Pose. Since we want to maintain the x, y, z position, we call arm_kinematics.inverseKinematics and pass in the current position with its Y value (pitch) modified by the amount we want to tilt, determined by the joystick input and a speed modifier; this gives us the new joint angles. If inverseKinematics determines that this position is unreachable, we halve the change increment and try again, for a total of three possible attempts. If it still fails after three attempts, we return the original joint angles with no change.
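Putting that together, a sketch of the logic (the arm_kinematics calls are the ones named above, but their exact signatures, how inverseKinematics signals failure, and the speed modifier value are all assumptions):

import arm_kinematics  # The arm control package's kinematics module.

SPEED_MODIFIER = 0.05  # Radians of pitch per full joystick deflection (assumed value).

def up_down_tilt(joystick_input, cur_angles):
    """Tilt the claw up/down while keeping its base fixed in space."""
    # Current pose as [x, y, z, X, Y, Z] from the forward kinematics.
    pose = arm_kinematics.Mat2Pose(arm_kinematics.forwardKinematics(cur_angles))
    delta = joystick_input * SPEED_MODIFIER  # Positive input tilts up (sign assumed).
    for _ in range(3):  # Up to three attempts, halving the increment each time.
        target = list(pose)
        target[4] += delta  # Modify only the pitch (Y) value.
        try:
            return arm_kinematics.inverseKinematics(target, cur_angles)
        except ValueError:  # Assumed failure mode for an unreachable pose.
            delta /= 2
    return cur_angles  # All attempts failed: leave the joints unchanged.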
Arm Sim
Testing With the Arm Sim
First, ensure you have pybullet 3.2.5 and numpy 1.21.5 installed. Then, navigate to arm_sim and run:
python3 ros2_arm_sim.py
The following topics will be up and running:
/armBrushedCmd
/armBrushedFb
/armBrushlessCmd
/armBrushlessFb
/parameter_events
/rosout
Cmd topics are the topics to which we send messages in order to move the individual arm joints, i.e., the desired positions.
Fb topics reflect the current state of the arm. Subscribing to them keeps our information up to date, allowing us to perform the computations needed to reach future positions.
All the topics send/receive a Float32MultiArray msg with three entries, each corresponding to the position in degrees of one of three distinct joints.
/armBrushed has claw state, wrist rotation, and wrist pitch in that order. Note that claw state is not a position in degrees, but rather a state/switch. When it is greater than 0, the claw closes.
/armBrushless has elbow, shoulder, and waist.
To test your functions, publish the desired position(s) into the cmd topics. If you need to retrieve the current state of any joint, subscribe to the fb topics. Good luck!
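For example, a minimal rclpy script to command the brushless joints might look like this (the node name and test values are arbitrary):

import time

import rclpy
from rclpy.node import Node
from std_msgs.msg import Float32MultiArray

def main():
    rclpy.init()
    node = Node('arm_sim_tester')
    pub = node.create_publisher(Float32MultiArray, '/armBrushlessCmd', 10)
    time.sleep(0.5)  # Give the sim's subscriber a moment to discover us.

    msg = Float32MultiArray()
    msg.data = [30.0, 45.0, 0.0]  # [elbow, shoulder, waist] in degrees.
    pub.publish(msg)
    node.get_logger().info(f'Published: {msg.data}')

    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()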
Modular System: Raspberry Pi 5
Raspberry Pi IP Address (Ethernet): 192.168.1.17
Raspberry Pi Username: mgrobotics-rpi5
Raspberry Pi Password: mcgillrobotics
Raspberry Pi 5 role
We are using the Raspberry Pi to offload camera streaming and arm-control-related code from the Jetson. To run our code on the Pi, you will need to use Docker.
Why we use Docker to run ROS2:
ROS2 Humble requires Ubuntu 22.04. Because the Raspberry Pi 5 only supports Ubuntu 23.10 and newer releases, we are unable to run ROS2 Humble natively on it; as of June 6th, 2025, the Pi runs Ubuntu 24.04. To run ROS2, we therefore use Docker to run a containerized version of Ubuntu 22.04.
Installing Ubuntu 24.04:
Installing Ubuntu 24.04 requires using the Raspberry Pi Imager to burn an Ubuntu 24.04 ISO file onto a MicroSD card. You can refer to this guide to find the links to the Raspberry Pi Imager and the ISO image.
Docker Setup:
Docker should already be set up on the Pi, but you can refer to this guide to learn more about these steps or to install Docker on your own machine.
How to run ROS2 on Raspberry Pi 5:
Accessing the Pi via SSH:
We recommend following our guide on setting up an SSH connection to the Jetson. The steps for connecting to the Raspberry Pi are very similar, the key difference being the static IP address you will be connecting to, which is:
Raspberry Pi Ethernet IP Address: 192.168.1.17
The login information for the Raspberry Pi is:
Username: mgrobotics-rpi5
Password: mcgillrobotics
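For example, from a machine on the rover's network:
ssh mgrobotics-rpi5@192.168.1.17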
Running Docker:
The Raspberry Pi should have our Docker image saved locally, so to run a container, open a new terminal, navigate to the docker directory, and run:
sudo docker compose up
This will instantiate a container in your current terminal. Open a new terminal and run:
sudo docker compose exec halley bash
Now you should be in the container, in the rover25_ws directory. Run the following to compile and source your ROS src directory:
colcon build
source install/setup.bash
In the case that there is no image, or you would like to build from scratch again, you can go to the docker directory and run:
sudo docker compose build
Just as an FYI: once you have built an image, there is no need to build it again unless you want to change the underlying environment through the Dockerfile, in which case rebuilding is necessary. Otherwise, you can just launch a container as discussed above.
Now you should be able to call and run our ROS2 nodes. :)