# Oak Camera Setup
Back: Tutorials
In addition to the ZED, which faces forwards, we are also using two Oak-1 Lite W cameras, one facing back-left and one back-right, to get a larger combined FoV and smaller blind spots (mostly the ~30 degrees in the back where the EE box is, plus some small blind spots to the left and right of the ZED). This helps both while performing tasks (it is harder to completely lose track of the buoys) and when localizing and building a map with SLAM, improving the robustness of our perception system.
The cameras are connected directly to the Jetson via USB.
## ROS Integration and Params
There are some guides on how to integrate the cameras with the Jetson via USB, like Jetson Deployment and USB Deployment, but they are mostly relevant when using Oak's pipeline, which we are not (the dependencies and swap size configuration might still be useful/necessary). What we are mainly using to interface the cameras with ROS is Oak's DepthAI ROS Driver, whose source code can be found on GitHub. This is the `depthai_ros_driver` package, installed by running `sudo apt install ros-humble-depthai-ros`.
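For reference, each device has a unique MX ID, which we use below to tell the two cameras apart. A minimal sketch for listing the IDs of the connected cameras with the DepthAI Python API (assuming the `depthai` pip package is installed on the Jetson):

```python
# Minimal sketch: list connected Oak devices and their MX IDs,
# assuming the depthai Python package (pip install depthai) is available.
import depthai as dai

for dev in dai.Device.getAllAvailableDevices():
    # getMxId() returns the unique device ID to put in i_mx_id
    print(dev.getMxId(), dev.state)
```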
Then, we are using the provided `camera.launch.py` launch file, which handles most of the configuration of the cameras, and we are launching it (once for each camera) from our own `oak.launch.py`. We are providing a name for each node being launched (it should be different for the two cameras), which is also the prefix for the names of the topics and frames/transforms being published (so that the two cameras do not publish to the same topics or have conflicting TFs).
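As a rough illustration (not the actual contents of our repo), `oak.launch.py` could include `camera.launch.py` twice along these lines; the node names, the package holding the params file, and the "trash" TF topic names (used for the TF remapping described in the list below) are assumptions:

```python
# Rough sketch of an oak.launch.py; node names, the config package, and the
# "trash" TF topic names are assumptions, not our actual repository contents.
import os

from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import GroupAction, IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch_ros.actions import SetRemap


def generate_launch_description():
    driver_share = get_package_share_directory("depthai_ros_driver")
    params_file = os.path.join(
        get_package_share_directory("all_seaing_bringup"),  # hypothetical package
        "config",
        "oak.yaml",
    )

    actions = []
    for name in ("oak_back_left", "oak_back_right"):  # assumed node names
        actions.append(
            GroupAction(
                actions=[
                    # Divert the driver's /tf and /tf_static to topics nothing
                    # listens to; we publish the calibrated transforms ourselves.
                    SetRemap("/tf", f"/{name}/tf_trash"),
                    SetRemap("/tf_static", f"/{name}/tf_static_trash"),
                    # Launch the driver's camera.launch.py with a unique name,
                    # which prefixes all topics and frames of that camera.
                    IncludeLaunchDescription(
                        PythonLaunchDescriptionSource(
                            os.path.join(driver_share, "launch", "camera.launch.py")
                        ),
                        launch_arguments={
                            "name": name,
                            "params_file": params_file,
                        }.items(),
                    ),
                ]
            )
        )
    return LaunchDescription(actions)
```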
We are also providing a params file to change some settings of the cameras. The full list of parameters is given on the DepthAI ROS Driver page; it is extremely long, but most of the parameters usually do not need to be changed. The main changes we are making are listed below (an example params file follows the list):
- Setting `i_mx_id` to each camera's ID in the respective node being launched, so that we do not have to deal with USB port assignments changing after connecting/disconnecting stuff from the Jetson or even rebooting it
- Setting `i_tf_parent_frame` to be different for each camera
- Setting `i_nn_type` to 'none' to not run any of Luxonis's AI features, which would slow down the Jetson and the cameras, since we are not using them
- Setting `i_pipeline_type` to 'rgb' to not compute depth data using the AI features (the cameras are monocular, so depth cannot be inferred directly)
- Setting `i_publish_tf_from_calibration` to 'false', in an attempt to not clutter the TF tree with more frames, since we are only using the optical frames (but that didn't work, so we are remapping the /tf and /tf_static topics published by the nodes to some fake ones that get ignored, as in the launch sketch above, and we publish the LiDAR-camera calibration, and thus the optical frames, ourselves)
- Setting `i_resolution` to '1080P', which is smaller than '13MP' but maybe still provides the full FoV (that needs to be actually checked)
- Setting `i_set_isp_scale` to 'false', in an attempt to ensure the full FoV is depicted in the camera images
- `i_sensor_img_orientation` can be set to 'ROTATE_180_DEG', which is the default value and the one we are using now when the USB port is at the bottom (counterintuitively, since the top of the camera is facing up, but that is probably a result of the camera module placement inside the enclosure), or to 'NORMAL' if the camera is upside down. There are also options for rotating the image 90 degrees or automatically detecting the orientation, all listed in `sensor_helpers.cpp` in the source code (which also has the supported resolutions for each camera module and other sensor-related settings). This parameter changes the orientation of the resulting image and probably also the orientation of the optical frame with respect to the link frame and the internal IMU.
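Putting these together, a params file for one of the cameras might look roughly like the following. The node name and MX ID are placeholders, and the exact grouping of the parameters under the `camera:`/`rgb:` namespaces is an assumption to be checked against the driver docs:

```yaml
# Hypothetical params file for one camera; the top-level key has to match the
# name passed to camera.launch.py, and the MX ID below is a placeholder.
oak_back_left:
  ros__parameters:
    camera:
      i_mx_id: "14442C10D13EABCE00"   # placeholder; this camera's unique ID
      i_tf_parent_frame: "oak_back_left_link"
      i_nn_type: "none"                # no on-device AI
      i_pipeline_type: "rgb"           # monocular, RGB only, no depth
      i_publish_tf_from_calibration: false
    rgb:
      i_resolution: "1080P"
      i_set_isp_scale: false
      i_sensor_img_orientation: "ROTATE_180_DEG"
```

Since ROS 2 params files can hold parameters for multiple nodes, a single file with one top-level key per camera name can be shared by both launches.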