Initial Work

I initially started out thinking about the blimp operation as a series of states, which I have captured in this state diagram

Each level, starting at the top, corresponds to a higher level of autonomy. The last level, labeled "Unknown", contains states I am not sure we will get to or even need to think about.

State - Move Into Position

Controller

A few of the other teams are using some variation of the ESP-32 with MicroPython.

I initially looked at the ESP-32: it is low power, has onboard Bluetooth and WiFi, and has all the outputs we need (PWM, I2C, etc.). It could be a good platform, but limitations such as no native MicroPython Bluetooth library and single-threaded processing make it limited and potentially slow.

More recently I have been using a Raspberry Pi Zero W. It also has onboard Bluetooth and WiFi, but runs a full version of Linux and Python, including all the major libraries (NumPy, SciPy, OpenCV, etc.). I have even gotten PyTorch to run on it fairly well. The downsides are that it is larger than the ESP-32, doesn't have native PWM, and uses a lot more power, but I think we can work around all of those things. Regarding the PWM, a newer library called GPIOZero is available for Raspberry Pi OS that uses a software-based PWM, and it seems to work just fine. It even handles motor control very well. I had to adjust a few parameters to ensure the PWM signal is stable, however, namely:

Changing the pin factory: `pin_factory=PiGPIOFactory()`

This requires starting the pigpio daemon by executing `sudo pigpiod` at startup, which can be added to the `~/.bashrc` file.

Using the GPIOZero library is very straightforward, and I have a test script that speeds up and slows down both the motors and servos; a minimal sketch of that idea is below.
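The sketch below shows the general shape of such a test script, assuming the pigpio daemon is already running; the GPIO pin numbers are placeholders, not our actual wiring.

```python
# A minimal sketch of driving a motor and a servo with GPIOZero using the
# pigpio pin factory for stable PWM timing. Pin numbers are hypothetical.
from time import sleep

from gpiozero import Motor, Servo
from gpiozero.pins.pigpio import PiGPIOFactory

factory = PiGPIOFactory()  # requires `sudo pigpiod` to have been run

# Hypothetical wiring: H-bridge inputs on GPIO 17/18, servo on GPIO 27.
motor = Motor(forward=17, backward=18, pin_factory=factory)
servo = Servo(27, pin_factory=factory)

# Ramp the motor up and back down while sweeping the servo.
for speed in (0.25, 0.5, 0.75, 1.0, 0.5, 0.0):
    motor.forward(speed)         # speed is a fraction of full power (0..1)
    servo.value = 2 * speed - 1  # map 0..1 onto the servo's -1..1 range
    sleep(1)

motor.stop()
servo.detach()  # stop sending pulses so the servo relaxes
```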

Controlling the motors also requires an external H-bridge; I just used the same one GMU selected.

Flight dynamics

I have not run tests with an actual balloon yet, but I have sketched out a setup with two motors, each attached to a servo that controls their pitch up and down. I 3D printed the servo mount and the motor mount that attaches to the servo. STLs are here

Seen here in these two images:

I envision the motors mounted something like this front view drawing

and seen from the side

The basic movements like forward, back, left, and right are controlled like a tank (forward = both motors forward, turn right = left motor forward and right motor in reverse, etc.).

But to move up or down in altitude, the motors would pitch up/down, as seen here, controlled by the position of the servo. A sketch of the tank-style mixing is below.
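Here is a minimal sketch of that tank-style mixing, assuming motor commands in the -1..1 range; the `tank_mix` name is just for illustration.

```python
# A minimal sketch of tank-style mixing: map a forward/back (throttle) and
# left/right (turn) command onto left/right motor speeds in -1..1.
def tank_mix(throttle: float, turn: float):
    """Return (left, right) motor speeds, clamped to the -1..1 range."""
    left = max(-1.0, min(1.0, throttle + turn))
    right = max(-1.0, min(1.0, throttle - turn))
    return left, right

# Full forward -> both motors forward; turn right in place -> left motor
# forward, right motor reverse (as described above).
assert tank_mix(1.0, 0.0) == (1.0, 1.0)
assert tank_mix(0.0, 1.0) == (1.0, -1.0)
```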

Manual Control

To get the blimp into initial position we are allowed, and should use, some manual control. Again, based on what GMU/UCLA has done, I paired a PS3 controller with the Raspberry Pi Zero via Bluetooth. This process is a bit trickier than it should be, but I followed the process I found at this website and it works fairly well (although I have not tried it with a generic brand of PS3 controller). Note that I had to use the sixad script because I couldn't get bluetoothctl to find the PS3 controller.

To access the PS3 controller buttons I used the pygame Python library and the pygame.joystick.Joystick class. To read the button or joystick positions you can use either the get_button(<button_num>) or get_axis(<axis_num>) methods of the Joystick class, as in the sketch below.
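A minimal sketch of polling the controller with pygame; the button and axis numbers here are placeholders (the real mapping is in the table below).

```python
# A minimal sketch of polling a PS3 controller with pygame. Button and axis
# numbers are hypothetical; see the mapping table below for the real ones.
import pygame

pygame.init()
pygame.joystick.init()

joystick = pygame.joystick.Joystick(0)  # first connected controller
joystick.init()

while True:
    pygame.event.pump()  # let pygame refresh the joystick state

    # get_button() returns 0/1; get_axis() returns a float in -1..1.
    x_pressed = joystick.get_button(0)  # hypothetical button number
    stick_x = joystick.get_axis(2)      # hypothetical axis number
    print(x_pressed, stick_x)

    pygame.time.wait(100)  # poll at roughly 10 Hz
```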

When the PS3 controller is connected via Bluetooth, the button mapping should be the same as in the table below. The values shown for the analog sticks are the extreme values when moving the stick in that direction. (Note: I noticed the button mapping changes if the controller is connected via USB.)

Motor controls

DC propeller motors require an H-bridge to function correctly, and we have chosen the DRV8835 dual H-bridge based on what other schools have chosen. One board will control two DC motors.

The H-bridge and the servos require a PWM signal to operate, and the Raspberry Pi can provide that via the gpiozero library (which is preinstalled in the latest versions of Raspberry Pi OS). Although the DC motors are not that sensitive, the servos require more precise timing, and I have found you need to use an alternate pin factory called PiGPIOFactory, which is also available through the gpiozero library but requires a few extra commands.

There is a demo of the PS3 controller controlling the speed/direction of two motors connected via the H-bridge and two servos connected directly to the RPi here. The demo maps the right analog stick to forward/reverse/left/right controls and L2 and R2 to up and down, as described here.
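For reference, the control mapping could look something like the sketch below. The axis and button numbers and the update function are assumptions for illustration, not the actual demo code.

```python
# A hedged sketch of the demo's control mapping. Axis/button numbers are
# hypothetical values for a Bluetooth-connected PS3 controller.
RIGHT_STICK_X, RIGHT_STICK_Y = 2, 3  # hypothetical axis numbers
L2_BUTTON, R2_BUTTON = 8, 9          # hypothetical button numbers

def clamp(v):
    return max(-1.0, min(1.0, v))

def update(joystick, left_motor, right_motor, servos):
    """Map stick input to tank-style motor speeds and L2/R2 to servo pitch."""
    turn = joystick.get_axis(RIGHT_STICK_X)
    throttle = -joystick.get_axis(RIGHT_STICK_Y)  # pushing up reads negative
    left_motor.value = clamp(throttle + turn)     # gpiozero Motor.value: -1..1
    right_motor.value = clamp(throttle - turn)

    # R2 pitches the motors up, L2 pitches them down.
    pitch = joystick.get_button(R2_BUTTON) - joystick.get_button(L2_BUTTON)
    for servo in servos:
        servo.value = float(pitch)  # -1, 0, or +1
```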

IMU

Other teams have discussed the use of an IMU. We should have a few on order; I have not done any work with them for this project yet. My initial thought is an IMU could help orient the blimp so there is a consistent UP/DOWN, but maybe more could be done with dead reckoning to estimate absolute position?

State - Search

Once we move into position manually we will have to locate the targets. Assuming we are not lined up directly in front of them, we have to look around to find them. I was thinking about a preprogrammed scan pattern, similar to a scanning radar. The blimp could move up and down in elevation while continuing this scan pattern until it locates the target; from there it could move on to the next state, Capture.

Locating the target

I have started to look at the use of the PiCamera as our sensor for locating the target. The most widely used library for computer vision with the PiCamera seems to be OpenCV, which is a HUGE library written in C++ but has a Python wrapper. Precompiled binaries are available for the Raspberry Pi Zero using:

pip3 install opencv-python

Initially I ran into a few problems that I traced to an incompatibility issue with the numpy version installed with the latest RPi OS. I was able to solve it by upgrading numpy to a newer version by running pip3 install -U numpy.

The OpenCV library contains many algorithms for various CV tasks. The latest Python documentation is hard to find because they 'sell' classes to teach it and seem less willing to give away the Python docs, but all the docs for the latest C++ versions are here. I found I can get most of what I need from Medium posts and Stack Overflow questions. Although there are functions in OpenCV for object detection and tracking, my initial experiments trying to run those on an RPi Zero did not work out well (FPS rates << 1, etc.). I have had some initial success with color detection based on posts such as this.

Color detection is fairly lightweight. It requires capturing an image, building a mask based on the color values needed, and then overlaying that mask onto the captured image. My initial work is with RGB values, but there seems to be a lot of support online for using HSV masks instead. Definitely more work to be done here, but so far I think color detection offers the most promise. The challenge with color detection will be selecting the correct mask values (either in HSV or RGB). If the mask is too wide it will pick up anything that is close in color; if it is too narrow it will miss the target under different lighting. I believe for best results we should define the mask based on the lighting in the highbay of MESH where the game will be played (home field advantage?). A sketch of the masking steps is below.
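Here is a minimal sketch of those masking steps using HSV, assuming a green target; the HSV bounds are placeholders that would need tuning to the actual lighting.

```python
# A minimal sketch of HSV color masking with OpenCV. The bounds below are
# hypothetical and would need tuning for the lighting at MESH.
import cv2
import numpy as np

frame = cv2.imread("test_frame.jpg")          # or a frame from the PiCamera
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)  # OpenCV stores images as BGR

lower_green = np.array([40, 70, 70])          # hypothetical lower HSV bound
upper_green = np.array([80, 255, 255])        # hypothetical upper HSV bound
mask = cv2.inRange(hsv, lower_green, upper_green)

# Overlay the mask onto the original image to visualize the detection.
detected = cv2.bitwise_and(frame, frame, mask=mask)
cv2.imwrite("mask.png", mask)
cv2.imwrite("detected.png", detected)
```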

I have done some initial work detecting color using dry erase markers. Here is an example of what the mask looks like when trying to detect the color green (original image on left, mask on right).

Once we have detected the target color we can move to the Capture state.

State - Capture

This state would again utilize the PiCamera, but assuming we have already detected the target and have it in the camera's FOV, we just need to navigate toward it.

My initial thought is that we could steer the blimp toward the target by trying to keep the target in the center of the camera. We can do this by determining the centroid of the target (essentially finding the geometric center of the detected mask), then determining the error between the centroid and the center of the image, as shown here.

Here is a script that can compute the error when trying to detect green objects.
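A minimal sketch of that error computation, using image moments to find the centroid of the mask from the previous step:

```python
# A minimal sketch of computing the steering error from a binary mask,
# using image moments to locate the centroid of the detected pixels.
import cv2

def centroid_error(mask):
    """Return (x_error, y_error) in pixels between the mask centroid and
    the image center, or None if nothing was detected."""
    moments = cv2.moments(mask, binaryImage=True)
    if moments["m00"] == 0:  # no pixels passed the mask
        return None
    cx = moments["m10"] / moments["m00"]
    cy = moments["m01"] / moments["m00"]
    h, w = mask.shape[:2]
    return cx - w / 2, cy - h / 2
```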

PID Control

The error in X and Y coordinates (left/right and up/down) could be the control input to the autonomy algorithm to steer the blimp. This X/Y error could be the control input into a pair of PID controllers, one for each dimension.
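A minimal PID sketch, assuming the pixel errors above as input and a -1..1 motor/servo command as output; the gains are placeholders that would need tuning on the actual blimp.

```python
# A minimal PID controller sketch: one instance per axis (X for yaw,
# Y for motor pitch). Gains are hypothetical and untuned.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        derivative = 0.0
        if self.prev_error is not None:
            derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

yaw_pid = PID(kp=0.005, ki=0.0, kd=0.001)    # X error -> turn command
pitch_pid = PID(kp=0.005, ki=0.0, kd=0.001)  # Y error -> servo pitch command
```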

The PID controllers would not give us distance to the target. We could write an algorithm that, once the target is centered, moves forward and tries to fill as much of the image as possible with green pixels (the more green, the closer we are to the target).
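The "how much green" measure could be as simple as the fraction of mask pixels that are on; this is a hedged sketch, not a calibrated range estimate.

```python
# A sketch of the closeness heuristic: the fraction of mask pixels that
# passed the color threshold is a rough proxy for distance to the target.
import cv2

def green_coverage(mask):
    return cv2.countNonZero(mask) / mask.size  # 0.0 (far) .. 1.0 (filling FOV)
```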

Capturing the target

There are lots of videos on GMU's and UCLA's websites that showcase their attempts to develop capture mechanisms. These should be a good starting point for us. The only other thought I had was: how do we confirm we have captured the target? Maybe some sort of range detector like this could give us confirmation that we have indeed captured the target.

State - Find Goal and Navigate to Goal

Once we capture the target we must move it to the goal. My thought is both of these processes would work similarly to Search and Capture, but with a different mask (for a different color).

April Tags

Just learned that the goals will be marked with April Tags, which should help with locating them. There are a few Python libraries built by the April Tag lab, but this one seemed to work best with Python 3 and the RPi Zero. Installing via pip3 worked without issue. I built a quick script to test the detection functionality. Performance seems good for static tests, as seen here:
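For reference, detection with the pip-installable apriltag bindings looks roughly like the sketch below; which specific library the script uses isn't named here, so treat the import and API as an assumption.

```python
# A hedged sketch of AprilTag detection, assuming the `apriltag` pip
# package (pip3 install apriltag); the actual library used may differ.
import apriltag
import cv2

frame = cv2.imread("goal.jpg")                  # or a PiCamera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # detector wants grayscale

detector = apriltag.Detector()
detections = detector.detect(gray)

for det in detections:
    print(det.tag_id, det.center)  # tag ID and pixel center of each tag
```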

Avoiding Obstacles and/or Opponents

I have not thought much about this, but we would potentially be competing against other teams that are trying to take the goal from us or keep us from getting to it. How would we deal with that?
