Remote

The remote controller is a first piece of software for the robot. I might make a "Software" page to collect multiple items if that makes sense later.

Basic layout

The Raspi will be running nodejs, which will serve a web page with a video window, robot status information, and controls. The controls could be forward/backward/left/right buttons or eventually a joystick.

(html/javascript joystick: https://github.com/bobboteck/JoyStick https://www.instructables.com/Making-a-Joystick-With-HTML-pure-JavaScript/)

This adafruit stuff looked like it might be interesting: https://learn.adafruit.com/wifi-controlled-mobile-robot/building-the-web-interface

Here's something on streaming video with nodejs: https://qvault.io/2020/07/28/hls-video-streaming-with-node-js-a-tutorial/

Picamera stuff related to streaming: https://picamera.readthedocs.io/en/latest/recipes2.html#web-streaming

Robot Control API

This http server on the robot will serve a few paths (a route sketch follows the list):

  • /api/v0 will give access to the robot control REST API to send commands and get status
  • /video (or is there a standard path?) to get access to the mjpeg stream for the camera
  • /index.html will be the main page which will show controls (which use the API) and stream the video (which uses the video stream)
  • Anything else (including index.html) will just be served from disk if it exists. This allows extra css and javascript files, images, etc. I can also add other html pages as appropriate.
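
A rough sketch of that route layout is below. Express is an assumption here (nothing is decided yet), and the port and the "public" directory are placeholders:

```js
// server.js - rough sketch of the route layout (Express is an assumption,
// and the port and "public" directory are placeholders)
const express = require('express');
const app = express();

app.use(express.json());

// REST API for commands and status lives under /api/v0
// (individual handlers are sketched further down the page)
const api = express.Router();
app.use('/api/v0', api);

// MJPEG stream from the camera; the path and the streaming code are TBD
app.get('/video', (req, res) => {
  res.status(501).send('video stream not wired up yet');
});

// Anything else (index.html, css, js, images) is served straight off disk
app.use(express.static('public'));

app.listen(8080, () => console.log('robot web server listening on :8080'));
```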

So node will handle anything under /api/v0 in a special way. Those requests will often be translated into serial commands.

  • You could POST to /api/v0/motors to set the motor speed for each motor.
    • But low level commands like that should be time-limited so the motors turn back off if another update doesn't arrive within a second or so (see the watchdog sketch after this list).
  • You could POST to /api/v0/motion to set velocity and angle for the device
    • But motion and motor commands shouldn't be mixed, because they would usually conflict over how to set the motor speeds
  • You could GET from /api/v0/encoder to get encoder counts
  • You could GET from /api/v0/odometry to get current position in robot coordinates
  • Other sensors could be grouped under /api/v0/sensors/... or just each have their own namespace like /api/v0/sonar and /api/v0/imu and /api/v0/lidar and /api/v0/bumper
    • The right way to access each sensor will probably dictate how it appears in the API, so I won't know the final layout until I look at each one in detail
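
As one concrete example of the time-limited idea above, here's a sketch of a motors handler with a 1-second watchdog, continuing the server.js sketch from earlier. The sendSerial() helper and the "M <left> <right>" command format are made up for illustration; the real serial protocol isn't defined yet.

```js
// Continuing the server.js sketch above: a motors handler with a watchdog.
// sendSerial() and the "M <left> <right>" command format are hypothetical.
let watchdog = null;

function sendSerial(cmd) {
  // placeholder: would write the command out over the serial link
  console.log('serial>', cmd);
}

api.post('/motors', (req, res) => {
  const { left = 0, right = 0 } = req.body || {};
  sendSerial(`M ${left} ${right}`);

  // Re-arm the watchdog: if no new command arrives within 1000 ms,
  // stop the motors so a dropped connection can't leave them running
  if (watchdog) clearTimeout(watchdog);
  watchdog = setTimeout(() => sendSerial('M 0 0'), 1000);

  res.json({ left, right });
});
```

The same re-arming pattern would apply to any other low-level command that directly drives hardware.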

Notes on live video