ROS_Projects - RicoJia/notes GitHub Wiki

========================================================================

Project Tools

========================================================================

  1. There is a dedicated tool for publishing cmd_vel: teleop_twist_keyboard -> makes Twist msgs. The terminal running it must have keyboard focus for the controls to work.

    • rosrun teleop_twist_keyboard teleop_twist_keyboard.py
  2. Debugging tools: they tie into the warn, debug, fatal, etc. log msgs. See the rospy sketch after this list.

    • rqt_console and rqt_logger_level are usually opened concurrently on two different terminals.
      • The levels are ordered from least to most severe, and only msgs at or above the level set in rqt_logger_level will show
        • rosrun rqt_console rqt_console
      • use ROS_INFO_STREAM for debugging. (like ROS_INFO_STREAM("lol" << "hahah"); )
  3. Diagnosis tool: roswtf, which checks for environment variables, etc.
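
  A minimal rospy sketch of the log levels above (node name and message text are made up):

    import rospy

    rospy.init_node("log_level_demo")
    rospy.logdebug("hidden unless rqt_logger_level sets this node to Debug")
    rospy.loginfo("visible at the default Info level")
    rospy.logwarn("still visible after raising the level to Warn")
    rospy.logerr("Error level")
    rospy.logfatal("Fatal is the most severe")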

========================================================================

Image Processing

========================================================================

  1. you need to install:

    sudo apt install ros-melodic-usb-cam
    sudo apt install ros-melodic-libuvc-camera
  2. To see the raw image:

    1. streaming thru /dev/video0
      rosrun usb_cam usb_cam_node _pixel_format:=yuyv             
    2. to view it (via image_view, rviz, or rqt_image_view)
      # This is your camera topic.
      rosrun image_view image_view image:=/usb_cam/image_raw

      # rviz -> add Image (not Camera), then select the topic. A Camera display renders the image in a frame from /tf, so it needs a fixed frame and /tf.

      # rqt_image_view, then select the topic.
  3. TODO? cp udev rules for what?

    sudo cp 80-uvc-sys76.rules /etc/udev/rules.d/
    sudo udevadm control --reload
    sudo udevadm trigger
  4. roslaunch me495_image_processing something. Maybe you need to remap the camera name. Also, to see the output of the image-processing node you need the bridge, so run the bridge node.

  5. ros_numpy can convert geometry_msgs.Point, Transform... to numpy data structures.

    from sensor_msgs.msg import Image
    import numpy as np
    import ros_numpy
    img_np = np.zeros((480, 640, 3), dtype=np.uint8)
    
    # Convert NumPy array to ROS Image message
    img_msg = ros_numpy.msgify(Image, img_np, encoding='rgb8')
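
    The reverse direction also works; a minimal sketch continuing from the code above:

    # Convert the ROS Image message back to a NumPy array
    img_back = ros_numpy.numpify(img_msg)
    assert img_back.shape == (480, 640, 3)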
  6. image_pipeline: activate the pieces you need and they process the image together. This also applies to point clouds and stereo vision

    1. camera_calibration package. Tutorials - checkerboard, package. Purpose: finds the intrinsic parameters, using a checkerboard and OpenCV

      1. publishes to /camera_info
      2. image_proc purpose: the camera lens distorts the image, and this node rectifies it
        1. subscribes to the raw image and camera_info
        2. publishes:
          image_mono: a monochrome unrectified image
          image_rect: a monochrome rectified image
          image_color: a color unrectified image. These images are de-mosaiced (that is, it accounts for the RGB pixel pattern of the camera)
          image_rect_color: a color rectified image
          
    2. depth_image_proc: processes depth images. In an RGBD image each pixel has depth info: how far light had to travel before hitting something. Used to make point clouds.

    3. Supplementary tools

      1. image_transport can compress images. Its republish feature is pretty good.
      2. web_video_server: if your robot interacts with the web, this is helpful.
    4. OpenCV (see the sketch after this list):

      1. cv_bridge: converts between ROS Image msgs and OpenCV images
      2. opencv_apps: runs different OpenCV algorithms as ROS nodes
      3. image_geometry: converts between pixel coordinates and other frames
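
      A minimal sketch tying cv_bridge and image_geometry together (topic names are assumptions, following the usb_cam topics above):

      import rospy
      from cv_bridge import CvBridge
      from image_geometry import PinholeCameraModel
      from sensor_msgs.msg import CameraInfo, Image

      bridge = CvBridge()
      model = PinholeCameraModel()

      def on_info(msg):
          model.fromCameraInfo(msg)  # load the intrinsics from camera_info

      def on_image(msg):
          cv_img = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")  # ROS Image -> OpenCV
          if model.K is not None:
              # back-project the center pixel to a 3D ray in the camera frame
              ray = model.projectPixelTo3dRay((320, 240))

      rospy.init_node("cv_demo")
      rospy.Subscriber("/usb_cam/camera_info", CameraInfo, on_info)
      rospy.Subscriber("/usb_cam/image_rect_color", Image, on_image)
      rospy.spin()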

========================================================================

Turtlebot3 (navigation)

========================================================================

  1. Concepts

    1. give an estimated pose in the occupancy map
      1. (on Rviz, it's "2D Pose Estimate"). This is the map you see on Rviz.
      2. the robot uses laser scan to "see" its surroundings. This is the laser-scanned map.
      3. you can use the teleop node to move the robot back and forth to see the surroundings, and the robot will estimate its pose.
      4. terminate the teleop node
    2. run the navigation node:
      1. the navigation node will plan the path for you. Even if an obstacle suddenly appears, it will avoid it.
    3. decipher a map: /map publishes nav_msgs/OccupancyGrid. Each cell is an int8: 0 is free, values up to 100 mean occupied, and -1 is unknown. See the sketch below.
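
      A minimal sketch of reading that map into NumPy (the data is row-major, so reshape by height then width):

      import numpy as np
      import rospy
      from nav_msgs.msg import OccupancyGrid

      def on_map(msg):
          # int8 cells: -1 unknown, 0 free, up to 100 occupied
          grid = np.array(msg.data, dtype=np.int8).reshape(msg.info.height, msg.info.width)
          rospy.loginfo("unknown cells: %d", np.sum(grid == -1))

      rospy.init_node("map_reader")
      rospy.Subscriber("/map", OccupancyGrid, on_map)
      rospy.spin()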
  2. Implementing a laser scan (see the sketch after this item)

    1. range_max should be slightly over the sensor's real maximum.
    2. Laser scan simulation: the laser scan frequency went up to 40 Hz... and then the robot in Rviz didn't move at all! A: because the previous process wasn't killed properly
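
    A minimal sketch of publishing a sensor_msgs/LaserScan (all values are placeholders; note range_max is set slightly over the sensor's true maximum, per the note above):

    import math
    import rospy
    from sensor_msgs.msg import LaserScan

    rospy.init_node("fake_scan")
    pub = rospy.Publisher("scan", LaserScan, queue_size=10)
    rate = rospy.Rate(5)  # keep the rate modest (see the 40 Hz note above)
    while not rospy.is_shutdown():
        scan = LaserScan()
        scan.header.stamp = rospy.Time.now()
        scan.header.frame_id = "base_scan"  # must exist in /tf
        scan.angle_min = 0.0
        scan.angle_max = 2.0 * math.pi
        scan.angle_increment = math.pi / 180.0
        scan.range_min = 0.12
        scan.range_max = 3.6  # slightly over the sensor's real 3.5 m maximum
        scan.ranges = [1.0] * 360
        pub.publish(scan)
        rate.sleep()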
  3. actionlib api

    1. you can set a timeout on it (see the sketch below):
      self.client.wait_for_result(timeout=rospy.Duration(5))
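
      A minimal SimpleActionClient sketch using that timeout (goal values are placeholders):

      import actionlib
      import rospy
      from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

      rospy.init_node("nav_client")
      client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
      client.wait_for_server()

      goal = MoveBaseGoal()
      goal.target_pose.header.frame_id = "map"
      goal.target_pose.header.stamp = rospy.Time.now()
      goal.target_pose.pose.position.x = 1.0
      goal.target_pose.pose.orientation.w = 1.0

      client.send_goal(goal)
      finished = client.wait_for_result(timeout=rospy.Duration(5))
      if not finished:
          client.cancel_goal()  # timed out; cancel instead of blocking forever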
  4. Turtlebot3 already has it, but the global costmap params need to be loaded: <rosparam file="$(find turtlebot3_navigation)/param/global_costmap_params.yaml" command="load" />

  5. Custom Global Planner: example. navfn/NavfnROS uses Dijkstra

    • The reason you can see the global plan: /move_base/NavfnROS/plan is being published
    • You may need to reverse the path when you see: "None of the 0 first of 0 (0) points of the global plan were in the local costmap and free"
  6. Basic components to launch a robot

========================================================================

Raspberry Pi

========================================================================

  1. we copy only the install folder to raspberry pi. SO MAKE SURE YOU INSTALL ALL YOUR NODES PROPERLY!!!

  2. rosserial

    • A driver that directly allows an Arduino, STM32, Xbee, or Teensy to publish!
    • rosserial_python is the client to use for rosserial, not the C++ one, because the C++ client is not complete! (Also, what does the OpenCR use rosserial_python for?)
    • Play a sound:
      rosrun rosserial_python serial_node.py _port:=/dev/ttyACM0 _baud:=1000000
      rostopic pub /sound turtlebot3_msgs/Sound "value: 2"
    • you need rosserial's serial_node to be launched on the turtlebot3!! Then you can play sounds on it, etc., using the OpenCR
  3. Overall Guidelines

    1. all data should be visualized!!
    2. rosmaster: runs on our computer
    3. roslaunch: use the machine tag to start nodes on the remote machine.
    4. local hostnames must be resolvable when using ROS_MASTER_URI=http://RicoSUPERMACHINE:11311
      • what is hostname resolution? - hostname resolution is the conversion from an assigned hostname to an IP address.
    5. No rviz, no rqt_plot on the turtlebot!!
    6. install docker with sudo apt install docker.io, not docker!! (a whole separate file system, for ARM systems)
      • then you use ./arm catkin_make in your WORKSPACE to create separate arm_install, arm_devel, arm_build,
      • then you use the rsync command to copy arm_install to the turtlebot home.

Finally, do sudo reboot to reboot the computer to get rid of the ros processes.

  1. Connection
    1. connect to Matt's office network TURTLENET, pwd: NU-TURTLE

      User account password: navigation
      turtlebot will overwrite the previous code.
      
    2. how to ssh into the robot:

      • ssh -o 'HostKeyAlgorithms=ssh-rsa' 'student@turtlebot4'
      • If you are connected to the same network as the turtlebot, the computers can ping each other by name ("RicoSUPERMACHINE", etc.)
      • you only need to accept the host key once. Once you've done so, you can just do ssh student@turtlebot4
    3. where is roscore running?

      $ export ROS_MASTER_URI=http://RicoSUPERMACHINE:11311   (you can set it to export ROS_MASTER_URI=http://turtlebotX:11311 as well, but then you need to specify "address" in the launch file to be localhost)

      • you can't catkin_make code on the turtlebot because the Raspberry Pi is too slow.
      • If you don't do the export, ROS_MASTER_URI defaults to http://localhost:11311, which will make the turtlebot look for the master on itself.
      • Do this on BOTH YOUR COMPUTER AND YOUR ROBOT!
        • OTHERWISE NODES LIKE robot_state_publisher will have trouble finding the master.
    4. roslaunch file

      1. You can launch your launchfile either on the computer or on the robot, so you need to specify:
        <machine name="local_from_turtlebot" address="localhost" env-loader="/home/student/install/env.sh"/>    <!-- for launching the launchfile on the turtlebot -->
        <machine name="remote_from_pc" address="turtlebot4" env-loader="/home/student/install/env.sh"/>         <!-- for launching on your computer -->
      2. define the machine tag, and specify env-loader (an env bash file that sets all the environment variables)
      <machine name="foo" address="foo-address" env-loader="/opt/ros/fuerte/env.sh" user="someone"/>

      3. start your node like this:

      <node machine="foo" name="footalker" pkg="test_ros" type="talker.py" />
    5. CMakeLists - install all your config and launch files:

      install(DIRECTORY launch/ DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}/launch)   
    6. Copy all your stuff over

      rsync -av --delete . student@turtlebotX:/home/student/ws
      1. rsync will only copy things that have changed since the last run.
      2. -a is archive mode: it recurses into directories and preserves permissions.
      3. -v is verbosity
      4. --delete: delete everything in the destination that's not present in the source
      5. rsync -av --delete arm_install/ student@turtlebotX:/home/student/install copies arm_install on your computer to /home/student/install. You can do the same with other directories.
    7. cross compilation. Compiling code on the Raspberry Pi is slow! We cross-compile from x86-64 to armhf.

      In the base workspace (same level as build, install), do
      ./arm catkin_arm install
    8. If you choose to run the launchfile on your host computer, you should run roscore first, then run your launchfile.

    9. copying a file from one computer to another through ssh:

      rsync -av --delete coffee_bot/controls/ pi@10.0.1.78:Desktop/controls
      pi@ip_address already puts you in the home directory of the remote computer.

========================================================================

Point Cloud

========================================================================

  1. install point cloud lib
    sudo apt-get install cmake libopenblas-dev liblapack-dev libarpack-dev libarpack2-dev libsuperlu-dev

========================================================================

Closed Chain Robots

========================================================================

  1. URDF -> SDF (PyBullet supports this, a sure thing);

  2. Moveit & URDF

    • URDF with a floating platform
      1. In URDF, the platform is connected to the base by prismatic joints, and these can be represented by joint states from PyBullet.
      2. In Moveit, we take in the goal pose and solve it with IK, but then visualization might be a problem...
  3. Common trick for making a universal joint or a spherical joint (rpy) in rviz: you make invisible dummy links: <link name="knee_${id}" />

  4. Moveit!

    1. choose the end-effector's parent link as the end effector if the end-effector joint is too chunky and hides the robot!

========================================================================

Motors

========================================================================

  1. Encoders (see the sketch below)
    1. test update only - wheel_vel and wheel_pos didn't match. Reason: the odometer was just reading the absolute wheel position for the pose. Solution: accumulate velocity.
      • [Lesson] use relative values in any potentially unsynchronized situation - starting at a different time, etc.
    2. the wheel encoder's initial value is not necessarily zero, so you have to initialize it.
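
    A minimal sketch of the relative-value lesson (callback name and units are hypothetical):

    prev_pos = None  # last raw encoder reading
    total = 0.0      # displacement accumulated since we started listening

    def on_encoder(raw_pos):
        global prev_pos, total
        if prev_pos is None:
            prev_pos = raw_pos       # first reading is the reference, not zero
            return
        total += raw_pos - prev_pos  # integrate changes; never trust the absolute value
        prev_pos = raw_pos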

========================================================================

Project Experience

========================================================================

  1. Meeting with Matt
    1. write a few lines at a time
    2. you can program without understanding the whole algorithm
    3. git: make small commits and restore from your previous stages. Commit things incrementally to a different branch.
    4. docstrings - they help you organize thoughts, connecting nodes
    5. naming: be consistent with the math; type out the math
    6. break the math up separately; watch the minus signs
    7. ask ppl for help and think about why
    8. README: pay special attention to instructions; read them more carefully

  2. Terms
    1. semantics: it's the meaning of a command, not the syntax. If you enter a command that is semantically wrong, it doesn't make sense even if the syntax is right.

Baxter

  1. Connect to Baxter

    1. nmcli connection up Rethink

    2. make sure you have an ethernet connection to Baxter

      1. go to the wired connection setup in Settings -> Network -> Wired settings.
      2. Profile name: Rethink; set your IP address to 10.42.0.1, netmask 255.255.255.0
      3. set it to manual, click add
      4. untick automatic connection in its settings.
      5. connect to Baxter using ethernet
      6. run $ nmcli connection up Rethink
    3. To have your node running on Baxter, you must set environment variables:

      1. export ROS_MASTER_URI=http://10.42.0.2:11311
      2. export ROS_IP=10.42.0.1 so your nodes advertise an address the robot can reach.
    4. write a bashrc file for this.

    5. Then you can run rqt_graph on this

  2. press the E-stop, then run the script to reboot

  3. 5 - 10 min to reboot

  4. Manual mode (zero-g): rosrun baxter_tools enable_robot.py -e

========================================================================

Project Ideas

========================================================================

  1. sambot for ros2
  2. v4l2 driver
  3. slamtec: has a package for object detection: Tiny YOLO v4, 60 objects. OpenVINO: Intel's inference toolkit; Movidius chips.
  4. carla: a reinforcement learning environment. Coach is another one.