Camera Calibration: Realsense Camera Intrinsic and Extrinsic Calibration with Eye-in-Hand Mode for Franka Panda

Camera Calibration

Getting the ground work ready

[NOTE]: This has been run and tested with ROS 1 Noetic on Ubuntu 20.04 and a RealSense D435 (though it should work with any other RealSense D-series camera) mounted on a Franka Emika robot, system version 3.0 or higher.

The repository we are going to use for this purpose is Repository

We are going to use a checkerboard for calibration, not ArUco markers; if you want to try markers instead, please visit the repo linked above.

Now for the installations:

  • libfranka and franka_ros - please install them from here
  • ROS wrapper for realsense - install it following step 1 over here. After installing, check that the depth stream works using the code from here:
roslaunch realsense2_camera demo_pointcloud.launch

The purpose of running the depth stream is to verify that the checkerboard (or any other target you have) lies within the camera's field of view; this will be useful during calibration later.
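
If you'd rather script this check, here is a minimal sketch that subscribes to the RGB stream and reports whether OpenCV can see the full board. The topic name and the 7x9 inner-corner pattern are assumptions; adjust them to your launch file and board.

#!/usr/bin/env python
# Minimal visibility check: is the full checkerboard inside the camera frame?
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

PATTERN = (7, 9)  # inner corners (cols, rows) -- assumption, match your board
TOPIC = "/camera/color/image_raw"  # assumption -- verify with `rostopic list`
bridge = CvBridge()

def callback(msg):
    # Convert the ROS image to OpenCV grayscale and look for the full board
    gray = cv2.cvtColor(bridge.imgmsg_to_cv2(msg, "bgr8"), cv2.COLOR_BGR2GRAY)
    found, _ = cv2.findChessboardCorners(gray, PATTERN, None)
    rospy.loginfo_throttle(1.0, "checkerboard fully visible: %s" % found)

rospy.init_node("board_visibility_check")
rospy.Subscriber(TOPIC, Image, callback)
rospy.spin()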

  • MoveIt 1.0 - you can install it through here
  • panda_moveit_ctrl - a third-party library; you can download it here
  • franka_ros_interface - built upon franka_ros to provide low-level control; download it from here
  • [Note]: It can happen that panda_moveit_config is not generated when you install MoveIt. If so, clone and install it from here. It has an important launch file for interfacing with the Franka.
  • calibration_toolbox - this is the repository linked above; clone it (do it from here)

Also make sure that the PC is running the real-time kernel needed to interface with the Franka.
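
You can quickly confirm this by checking the kernel string; on a correctly set-up machine it should mention PREEMPT_RT (e.g. 5.9.1-rt20):

uname -a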

Intrinsic Calibration

Follow the instructions given in this ROS Package. A few points to note:

  • Follow steps 1 and 2 as they are; in step 3, add the parameter --no-service-check to the rosrun command. Also be careful with the checkerboard printout: the squares should have the exact specified dimensions.
  • The method of moving the checkerboard is quite useful here; we are going to use the same motions for our extrinsic calibration.
  • If you fail to write the calibration file to the camera_info topic, at least keep the generated .yaml file; we are going to need it later.
  • The justification for doing the intrinsic calibration of the RealSense camera is that the K matrix generated by a user calibration and the K matrix from the factory settings are slightly different (big enough to matter); a quick comparison sketch follows this list.
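
Below is a minimal sketch that prints the factory K from the camera_info topic next to the K from your generated .yaml. The topic name and the calibration file path are assumptions; adjust them to your setup.

#!/usr/bin/env python
# Compare the factory K matrix against the user-calibrated one.
import rospy
import yaml
import numpy as np
from sensor_msgs.msg import CameraInfo

rospy.init_node("compare_intrinsics")
# Factory intrinsics, as currently published by the camera driver
info = rospy.wait_for_message("/camera/color/camera_info", CameraInfo)
factory_K = np.array(info.K).reshape(3, 3)

# User intrinsics from the .yaml generated by the intrinsic calibration
with open("calibration.yaml") as f:  # placeholder path -- adjust
    calib = yaml.safe_load(f)
user_K = np.array(calib["camera_matrix"]["data"]).reshape(3, 3)

print("factory K:\n", factory_K)
print("user K:\n", user_K)
print("abs difference:\n", np.abs(factory_K - user_K))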

Extrinsic Calibration

  • Print out the checkerboard of the desired dimensions, stick it on a rigid plane (e.g., a whiteboard), and keep the target fixed.
  • Now the camera mounted on the arm has to image the target (checkerboard) from different poses (I tried 24 poses). The way of collecting these poses is very similar to the intrinsic calibration we have already done.

Collecting the required poses

  • Take data at three different levels of distance between the camera and the rigid frame. Since the rigid frame is fixed, the camera (and thereby the end effector) has to be at three different distances from the board: very near (roughly 9-12 inches), medium (~18-24 inches), and farthest (~30-33 inches).
  • At each level, try to bring about skew in the camera frame about both the X and Y axes (just as done in the intrinsic calibration by tilting left, right, up, and down) and generate 9 such images per level. You will be able to generate 9 for the farthest distance and 9 for the medium one, but only 6 for the near one, as the last 3 would put the robot in self-collision.
  • You can achieve all of these robot orientations through the MoveIt interactive GUI, or by moving the robot manually; keep the demo point cloud mentioned above running to ensure the whole checkerboard is in the camera frame every time.
  • Finally, collect the joint angles for all these positions. You can read them directly from the joint_states rostopic; take the first seven joint angle values (the last two belong to the gripper fingers) and place them in a separate file containing only the raw data (see the sketch after this list).
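
As a convenience, here is a minimal sketch that appends the current arm configuration to a pose file. The space-separated one-line-per-pose format and the file name are assumptions; match whatever format calib_collector_server_node.py expects.

#!/usr/bin/env python
# Record the first 7 joint angles (the arm, not the gripper fingers).
import rospy
from sensor_msgs.msg import JointState

rospy.init_node("record_joint_pose")
msg = rospy.wait_for_message("/joint_states", JointState)
arm = msg.position[:7]  # drop the last two gripper-finger values
with open("calib_points.txt", "a") as f:  # placeholder file name
    f.write(" ".join("%.6f" % q for q in arm) + "\n")
print("recorded:", arm)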

Collecting the data

  • Close the demo point cloud and run the following:
roslaunch realsense2_camera rs_camera.launch
  • For the package calibration_toolbox, in scripts/calib_collector_server_node specify:
    • target: chessboard
    • calib_points_file: the location of the file where you saved all the points
    • image_topic: the RGB image topic of the camera
  • In src/calibration_toolbox/calib_collector.py, specify the base and end-effector links. For the ones in the lab, they are:
    • base_frame_name = "panda_link0"
    • ee_frame_name = "panda_hand"
  • Now launch the Panda robot with the following command (after activating the FCI on the Desk). It should bring up the MoveIt interactive motion-planning GUI:
roslaunch panda_moveit_config franka_control.launch robot_ip:=172.16.0.2 load_gripper:=true
  • Next, run the following to initialize the Panda robot service:
rosrun panda_moveit_ctrl panda_moveit_ctrl_server_node.py
  • Now run the following; it will bring the robot to a home position to initiate the process of collecting data:
rosrun calibration_toolbox calib_collector_server_node.py
  • Then run the following in a separate terminal to start the data collection:
rosservice call /collect_data <data_directory>

The Franka will go through all the poses; at each pose the RealSense will capture an image, and the end-effector pose will be recorded as a 4x4 homogeneous matrix, all stored in the data_directory. The image filename will be "X_img.png" and the pose "X_robotpose.txt", where X is the sample index.
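
To sanity-check a collected sample, a minimal sketch like the following (sample index 0 assumed) loads the saved pose and verifies it is a valid homogeneous transform:

#!/usr/bin/env python
# Load one saved end-effector pose and check it is a proper rigid transform.
import numpy as np

T = np.loadtxt("0_robotpose.txt").reshape(4, 4)  # sample index 0 assumed
R = T[:3, :3]
print("pose:\n", T)
print("R orthonormal:", np.allclose(R @ R.T, np.eye(3), atol=1e-6))
print("det(R) = %.6f (should be 1)" % np.linalg.det(R))
print("bottom row:", T[3])  # should be [0, 0, 0, 1]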

Chessboard detection

This step detects the pose of the chessboard in the camera frame. In src/calibration_toolbox/chessboard_detection.py, specify the

  • chessboard_pattern
  • camera_info.yaml: the path to the camera_info file, if it differs from the one currently published on the camera_info topic
  • data_directory: where you want to save the newly generated pose file
  • Run the following:
python chessboard_detector.py

It will save the newly generated markerpose.txt in the folder given above.
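
Under the hood this amounts to standard OpenCV chessboard pose estimation. As a reference, here is a minimal sketch of that idea rather than the toolbox's exact code; the pattern size, square size, intrinsics path, and image name are placeholders.

#!/usr/bin/env python
# Sketch: estimate the chessboard pose in the camera frame from one image.
import cv2
import numpy as np

PATTERN = (7, 9)         # inner corners -- match your board
SQUARE = 0.025           # square edge length in meters -- match your printout
K = np.loadtxt("K.txt")  # 3x3 intrinsics from your calibration -- placeholder
dist = np.zeros(5)       # or the distortion coefficients from the .yaml

# 3D corner positions in the board frame (z = 0 plane)
obj = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
obj[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

img = cv2.imread("0_img.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
found, corners = cv2.findChessboardCorners(gray, PATTERN, None)
assert found, "board not fully visible"
corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1),
                           (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
ok, rvec, tvec = cv2.solvePnP(obj, corners, K, dist)

T = np.eye(4)                       # 4x4 target pose in the camera frame
T[:3, :3], _ = cv2.Rodrigues(rvec)  # rotation vector -> rotation matrix
T[:3, 3] = tvec.ravel()
print(T)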

Solve the AX = XB to get the transform

Here A is the relative motion of the end effector between two robot poses (built from the X_robotpose.txt files), B is the corresponding relative motion of the target in the camera frame (built from the marker poses), and X is the fixed, unknown transform from the end effector to the camera.

  • In scripts/main_calib_node.py, specify where you want to save the data in data_directory and select the eye-in-hand calibration with calibration_option = "EH"
  • Finally, run the following to solve the AX = XB problem and generate the final calibration (transformation) as a pose.yaml file in the specified directory:
rosrun calibration_toolbox main_calib_node.py
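
If you want an independent cross-check of the solution, OpenCV ships its own hand-eye solver. The sketch below feeds it the collected robot and marker poses; the sample count and the per-sample X_markerpose.txt naming are assumptions.

#!/usr/bin/env python
# Cross-check the AX = XB result with OpenCV's hand-eye calibration.
import cv2
import numpy as np

N = 24  # number of collected poses -- adjust to your dataset
R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for i in range(N):
    Tg = np.loadtxt("%d_robotpose.txt" % i).reshape(4, 4)   # gripper in base
    Tt = np.loadtxt("%d_markerpose.txt" % i).reshape(4, 4)  # target in camera
    R_g2b.append(Tg[:3, :3]); t_g2b.append(Tg[:3, 3])
    R_t2c.append(Tt[:3, :3]); t_t2c.append(Tt[:3, 3])

R_c2g, t_c2g = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c,
                                    method=cv2.CALIB_HAND_EYE_TSAI)
print("camera-to-gripper rotation:\n", R_c2g)
print("camera-to-gripper translation:", t_c2g.ravel())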

Verification

In the case of a successful calibration, the checkpose values printed by the AX = XB solver in the terminal will be very close to each other. In my case they were homogeneous matrices: the diagonals of the relative rotation matrices were almost 1 and the translation elements were very close to each other. The other obvious check is that a point cloud registered with this calibration will align closely with the real scene.
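
To make "very close" quantitative, a small helper like the one below (file names are placeholders) reports the rotation angle and translation distance between two check poses:

#!/usr/bin/env python
# Rotation angle (deg) and translation distance (m) between two check poses.
import numpy as np

def pose_error(T1, T2):
    # Relative rotation; its angle follows from the trace of a rotation matrix
    dR = T1[:3, :3].T @ T2[:3, :3]
    angle = np.degrees(np.arccos(np.clip((np.trace(dR) - 1) / 2, -1.0, 1.0)))
    trans = np.linalg.norm(T1[:3, 3] - T2[:3, 3])
    return angle, trans

T1 = np.loadtxt("checkpose_0.txt").reshape(4, 4)  # placeholder filenames
T2 = np.loadtxt("checkpose_1.txt").reshape(4, 4)
print("rotation error: %.3f deg, translation error: %.4f m" % pose_error(T1, T2))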

To Do

  • Intrinsic depth calibration
  • Intrinsic calibration check/verification
  • Add how it can be generalized to different robotic platforms

Contributor: Zeeshan Ul Islam