Jetbot Camera ROS
In this tutorial, we are going to create a ROS node that captures images from a Jetbot camera and publishes them to a ROS topic. We'll then use a web server to visualize the published camera images in a web browser.
- JetBot environment set up. If not, check out Jetson Nano and Linux Setup.
- ROS environment set up on your Jetson. If not, check out HowTo ROS.
- Basic knowledge of ROS concepts such as nodes, topics, and messages. If not, check out ROS 101.
- Basic knowledge of publishers and subscribers in Python for ROS. If not, check out ROS PubSubPy.
- Basic knowledge of creating new packages in ROS.
- Basic knowledge of the Python programming language.
- Install cv_bridge:

```bash
sudo apt-get install ros-melodic-cv-bridge
```
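To confirm that cv_bridge and OpenCV work together before writing the node, you can run a quick check like the sketch below (an optional, minimal example; the dummy image size is arbitrary):

```python
# Optional sanity check: convert a dummy OpenCV (NumPy) image to a ROS Image message.
# Run it with the Python interpreter used by ROS Melodic (Python 2 on the Jetson).
import numpy as np
from cv_bridge import CvBridge

bridge = CvBridge()
dummy = np.zeros((480, 640, 3), dtype=np.uint8)  # black 640x480 BGR test image
msg = bridge.cv2_to_imgmsg(dummy, "bgr8")        # OpenCV image -> sensor_msgs/Image
print("%d %d %s" % (msg.height, msg.width, msg.encoding))  # expected: 480 640 bgr8
```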
- Create a new package named `camera_publish` inside your workspace directory (`~/MorBot/morbot_ros/morbot_ws/src`):

```bash
cd ~/MorBot/morbot_ros/morbot_ws/src
catkin_create_pkg camera_publish rospy sensor_msgs
```
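If your workspace is already sourced, you can optionally confirm that ROS can locate the new package; this small sketch uses rospkg and the package name created above:

```python
# Optional check: ask rospkg where the new package lives.
# Assumes the catkin workspace is already on your ROS package path.
import rospkg

rp = rospkg.RosPack()
try:
    print("camera_publish found at: %s" % rp.get_path('camera_publish'))
except rospkg.ResourceNotFound:
    print("camera_publish not found - source the workspace and try again")
```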
- Create a new directory named `scripts` inside your package directory (`~/MorBot/morbot_ros/morbot_ws/src/camera_publish`):

```bash
cd ~/MorBot/morbot_ros/morbot_ws/src/camera_publish
mkdir scripts
```
- Create a new source file inside the `scripts` directory. For example, create a Python file named `camera_publisher.py`:

```bash
cd scripts
touch camera_publisher.py
```

Make the script executable:

```bash
cd ~/MorBot/morbot_ros/morbot_ws/src/camera_publish/scripts
chmod +x camera_publisher.py
```
- The `camera_publisher.py` script captures images from the Jetbot camera using the `gstreamer_pipeline` function and publishes them to the `/camera/image_raw` ROS topic:
```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge, CvBridgeError
import cv2


# GStreamer pipeline function to open the camera
def gstreamer_pipeline(
    sensor_id=0,
    capture_width=1920,
    capture_height=1080,
    display_width=960,
    display_height=540,
    framerate=30,
    flip_method=0,
):
    return (
        "nvarguscamerasrc sensor-id=%d ! "
        "video/x-raw(memory:NVMM), width=(int)%d, height=(int)%d, framerate=(fraction)%d/1 ! "
        "nvvidconv flip-method=%d ! "
        "video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! "
        "videoconvert ! "
        "video/x-raw, format=(string)BGR ! appsink"
        % (
            sensor_id,
            capture_width,
            capture_height,
            framerate,
            flip_method,
            display_width,
            display_height,
        )
    )


def capture_and_publish():
    # Create a publisher
    pub = rospy.Publisher('/camera/image_raw', Image, queue_size=1)

    # Initialize the node with a fixed name so the private parameter
    # '~camera_type' resolves to /camera_publisher/camera_type
    rospy.init_node('camera_publisher')

    # Get the camera type from the parameter server
    camera_type = rospy.get_param('~camera_type', 'webcam')

    # Initialize CvBridge
    bridge = CvBridge()

    # Open the camera based on the camera_type parameter
    if camera_type == 'jetbot':
        cap = cv2.VideoCapture(gstreamer_pipeline(flip_method=0), cv2.CAP_GSTREAMER)
    elif camera_type == 'webcam':
        cap = cv2.VideoCapture(1)  # Change '1' as necessary to match your system's camera index
    else:
        rospy.logerr('Invalid camera_type. Please use either "jetbot" or "webcam".')
        return

    rate = rospy.Rate(50)  # 50 Hz

    while not rospy.is_shutdown():
        # Capture frame-by-frame
        ret, frame = cap.read()

        # Check if a valid frame was returned
        if ret:
            # Convert the image to a ROS Image message
            try:
                msg = bridge.cv2_to_imgmsg(frame, "bgr8")
            except CvBridgeError as e:
                print(e)
            else:
                pub.publish(msg)
        else:
            print("Failed to capture frame")

        rate.sleep()

    # When everything is done, release the capture
    cap.release()
    cv2.destroyAllWindows()


if __name__ == '__main__':
    try:
        capture_and_publish()
    except rospy.ROSInterruptException:
        pass
```
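Before wiring the camera into ROS, it can help to confirm that OpenCV can open the CSI camera through GStreamer at all. The sketch below reuses the same pipeline string as the node and assumes your OpenCV build has GStreamer support:

```python
# Standalone camera check (no ROS): open the same GStreamer pipeline with OpenCV
# and report whether a frame arrives. Assumes OpenCV was built with GStreamer support.
import cv2

pipeline = (
    "nvarguscamerasrc sensor-id=0 ! "
    "video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, framerate=(fraction)30/1 ! "
    "nvvidconv flip-method=0 ! "
    "video/x-raw, width=(int)960, height=(int)540, format=(string)BGRx ! "
    "videoconvert ! "
    "video/x-raw, format=(string)BGR ! appsink"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    print("Could not open camera - check the pipeline and OpenCV's GStreamer support")
else:
    ret, frame = cap.read()
    if ret:
        print("Got a %dx%d frame" % (frame.shape[1], frame.shape[0]))
    else:
        print("Camera opened but no frame was returned")
    cap.release()
```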
- Build the package:

```bash
cd ~/MorBot/morbot_ros/morbot_ws
catkin_make
source devel/setup.bash
```
- Set the `camera_type` parameter to either `jetbot` or `webcam`, then run the node:

```bash
rosparam set /camera_publisher/camera_type 'camera_type_value'
rosrun camera_publish camera_publisher.py
```

Replace `'camera_type_value'` with `jetbot` or `webcam`.
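Note that the node reads the private parameter `~camera_type`, which for a node named `camera_publisher` resolves to `/camera_publisher/camera_type`; that is why the `rosparam set` command above works. If you want to read the value back from Python, an optional sketch (requires a running roscore):

```python
# Optional check: read back the parameter the node will use.
# '~camera_type' inside the node resolves to '/camera_publisher/camera_type'.
import rospy

rospy.init_node('param_check', anonymous=True)  # needs a running roscore
value = rospy.get_param('/camera_publisher/camera_type', 'not set')
print("camera_type = %s" % value)
```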
- Check the published images:

```bash
rostopic echo /camera/image_raw
```
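Besides `rostopic echo`, a small subscriber can confirm that the images decode correctly on the receiving side. The sketch below (a hypothetical helper, not part of the package) converts each message back to an OpenCV image and logs its size:

```python
#!/usr/bin/env python
# Minimal verification subscriber: convert each /camera/image_raw message back to
# an OpenCV image with cv_bridge and log the frame size.
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge, CvBridgeError

bridge = CvBridge()

def callback(msg):
    try:
        frame = bridge.imgmsg_to_cv2(msg, "bgr8")
    except CvBridgeError as e:
        rospy.logerr(e)
        return
    rospy.loginfo("Received frame: %dx%d", frame.shape[1], frame.shape[0])

if __name__ == '__main__':
    rospy.init_node('image_listener', anonymous=True)
    rospy.Subscriber('/camera/image_raw', Image, callback, queue_size=1)
    rospy.spin()
```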
Next, we'll install the web_video_server package, which allows us to view ROS topics in a web browser.
- Install the web video server:

```bash
sudo apt-get install ros-melodic-web-video-server
```
- Run the web video server. Launch a new terminal and start the ROS master:

```bash
roscore
```

Then launch another terminal and start the server:

```bash
rosrun web_video_server web_video_server
```

- Open a web browser and navigate to `http://<jetbot_ip_address>:8080/stream?topic=/camera/image_raw` to view the camera images.
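If the browser shows nothing, you can check that the stream endpoint responds at all; the sketch below only needs the URL used above (the IP address is a placeholder for your Jetbot's address):

```python
# Quick reachability check for the MJPEG stream (run from any machine on the network).
try:
    from urllib.request import urlopen  # Python 3
except ImportError:
    from urllib2 import urlopen         # Python 2

JETBOT_IP = "192.168.1.100"  # placeholder - replace with your Jetbot's IP address
url = "http://%s:8080/stream?topic=/camera/image_raw" % JETBOT_IP

resp = urlopen(url, timeout=5)
chunk = resp.read(1024)  # read the first kilobyte of the multipart MJPEG stream
print("HTTP status: %s, received %d bytes" % (resp.getcode(), len(chunk)))
```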
- Create a new directory named `launch` inside your package directory (`~/MorBot/morbot_ros/morbot_ws/src/camera_publish`):

```bash
cd ~/MorBot/morbot_ros/morbot_ws/src/camera_publish
mkdir launch
```
- Create a new launch file inside the `launch` directory. For example, create a launch file named `camera_publisher.launch`:

```bash
cd launch
touch camera_publisher.launch
```
- The `camera_publisher.launch` file launches the `camera_publisher.py` node and the `web_video_server` node, and sets the camera via the `camera_type` parameter (`jetbot` or `webcam`):

```xml
<launch>
    <node name="camera_publisher" pkg="camera_publish" type="camera_publisher.py" output="screen">
        <param name="camera_type" value="jetbot" /> <!-- or "webcam" -->
    </node>

    <!-- Start the web_video_server -->
    <node name="web_video_server" pkg="web_video_server" type="web_video_server" output="screen" />
</launch>
```
- Run the launch file:

```bash
roslaunch camera_publish camera_publisher.launch
```
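If you prefer to start the launch file from a Python script (for example, from a larger application), the roslaunch API can do the same thing; the path below assumes the workspace layout used in this tutorial:

```python
# Start camera_publisher.launch from Python via the roslaunch API
# (equivalent to the roslaunch command above; assumes a roscore is available).
import os
import roslaunch

launch_file = os.path.expanduser(
    "~/MorBot/morbot_ros/morbot_ws/src/camera_publish/launch/camera_publisher.launch")

uuid = roslaunch.rlutil.get_or_generate_uuid(None, False)
roslaunch.configure_logging(uuid)
parent = roslaunch.parent.ROSLaunchParent(uuid, [launch_file])

parent.start()      # starts camera_publisher and web_video_server
try:
    parent.spin()   # block until Ctrl+C
finally:
    parent.shutdown()
```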
- Open a web browser and navigate to `http://<jetbot_ip_address>:8080/stream?topic=/camera/image_raw` to view the camera images.

- Check the published images:

```bash
rostopic echo /camera/image_raw
```