Part 13 AI CC
24.0705 (0625)
CC AI (+ Mavlink) (Pix6c/PX4) (2) (Gdrive).
- Create the
  - AI program (that performs, for example, object recognition) and the
  - C++/Python program that, based on the AI program's outputs, sends flight commands (MAVLink) to the FC (see the sketch after this list).
- Flight test in the
  - (1) simulated world (HITL, part (9)), ensuring that the MAVLink commands are safe, and then the
  - (2) real world (field test).
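As a rough illustration of the second program's role, below is a minimal Python sketch that turns an AI detection into a MAVLink velocity setpoint using pymavlink. The UDP connection string, the velocity gain, and the `get_target_offset()` detector hook are placeholders and assumptions for illustration, not my actual implementation; in practice this kind of code only gets run against HITL/SITL first, as described above.

```python
# Minimal sketch: forward a detection result to the FC as a MAVLink velocity setpoint.
# Assumes pymavlink is installed and the FC (or a HITL/SITL instance) is reachable on UDP 14550.
# The detector hook below is a hypothetical placeholder, not a real AI routine.
import time
from pymavlink import mavutil

def get_target_offset():
    """Hypothetical AI hook: horizontal offset of the detected object in the camera
    frame, normalized to [-1, 1], or None if nothing is detected."""
    return 0.2  # placeholder value for illustration

master = mavutil.mavlink_connection('udpin:0.0.0.0:14550')
master.wait_heartbeat()
print("Heartbeat from system", master.target_system)

while True:
    offset = get_target_offset()
    vy = 0.0 if offset is None else 0.5 * offset  # simple proportional sideways velocity (m/s)
    # SET_POSITION_TARGET_LOCAL_NED with a type_mask that uses only the velocity fields.
    master.mav.set_position_target_local_ned_send(
        0,                              # time_boot_ms
        master.target_system,
        master.target_component,
        mavutil.mavlink.MAV_FRAME_LOCAL_NED,
        0b0000110111000111,             # ignore position, acceleration, yaw, yaw rate
        0, 0, 0,                        # x, y, z position (ignored)
        0.5, vy, 0,                     # vx, vy, vz velocity (m/s)
        0, 0, 0,                        # accelerations (ignored)
        0, 0)                           # yaw, yaw_rate (ignored)
    time.sleep(0.1)                     # PX4 offboard mode expects setpoints at more than 2 Hz
```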
The first five sections of this Part B (Gdrive folder) describe how I got a few AI routines running (I tried quite a few the first time around) on the following platform HW:
- 13.1 AI Ubuntu. Haar cascades and MediaPipe gestures running on Ubuntu (documents 2_B1.1,2,3, 23.0122-0131). I started with Ubuntu as the first test platform (instead of the PI4/Nano) to avoid issues such as camera drivers (the PI4/Nano used their own custom versions of Ubuntu 18.04). A minimal example of this kind of test appears after this list.
- 13.2 AI PI5
- 13.3 AI PI4. Haar cascades, TFLite, and MediaPipe face mesh (document 2_B2, 23.0119-0121).
- 13.4 AI Jetson Nano. OpenCV and Haar cascades (documents 2_B3_2a-1, 2_B3_3a, 23.0110-0118; 2_B3_2a describes the Nvidia tutorials, for which I had trouble getting the camera and other drivers set up).
- 13.5 AI STM. Covers AI implementations on STM hardware.
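For reference, here is a minimal sketch of the kind of Haar cascade face-detection test mentioned for 13.1/13.3/13.4. It is not my original test script; it assumes opencv-python is installed and a camera is available at index 0, and uses the cascade file that ships with OpenCV.

```python
# Minimal Haar cascade sketch: detect faces from a webcam and draw boxes around them.
import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

cap = cv2.VideoCapture(0)           # camera index 0 assumed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```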
Last section:
- 13.6 AI study. This section mainly contains the hands-on demos I did when I first started studying AI.
The following diagrams show where the CC fits into the drone physically and functionally ("flight board" = CC). The second diagram is from Intelligent Quads.
NOTES
Using AI and ML for Indoor Navigation and Position Planning using Markov... - Farhang Naderi
PX4 Autopilot - Open Source Flight Control.
https://www.youtube.com/watch?v=LufaBry92EA
1,647 views, Sep 22, 2021. 2021 PX4 Developer Summit: Using AI and ML for Indoor Navigation and Position Planning using Markov localization Algorithm - Farhang Naderi, Eastern Mediterranean University. Speaker: Farhang Naderi.
Indoor localization for drones is one of the challenging tasks in industry. As indoor environments do not have access to GPS signals, other sources of velocity information are compulsory. In the proposed method, the drone localizes itself using a Machine Learning algorithm from a series of specific patterns on the ground. A camera mounted horizontally under the drone captures frames of the ground and processes them in real time. Each specific pattern has been trained to be recognized by the ML algorithm, which runs on a Google Coral TPU. The drone starts from a random place and moves over the patterns to update its initial random guess of its position. Using the Markov algorithm, the confidence of these guesses is updated with each new pattern. After 3 or 4 patterns, the drone is localized. The drone's stability is assured by a vision sensor (an Intel RealSense camera here) feeding data through MAVROS to a Pixhawk running PX4-Autopilot. An additional camera takes care of capturing the frames of the ground in real time for processing. The work has already been done and the hardware is set up.
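The confidence update described in the talk is essentially a discrete Bayes (Markov localization) filter. Below is a minimal illustrative sketch of that idea; the ground map, pattern labels, and sensor probabilities are made up for illustration and are not from the talk.

```python
# Minimal sketch of the Markov-localization idea: a discrete belief over candidate
# positions, updated each time the downward camera recognizes a ground pattern.
import numpy as np

ground_map = ["A", "B", "A", "C", "B"]                     # pattern label at each candidate position
belief = np.full(len(ground_map), 1.0 / len(ground_map))   # start with a uniform (random) guess

P_HIT, P_MISS = 0.9, 0.1   # assumed probability the classifier is right / wrong about a pattern

def update(belief, observed_pattern):
    """Bayes measurement update: weight each position by how well it explains the observation."""
    likelihood = np.array([P_HIT if p == observed_pattern else P_MISS for p in ground_map])
    posterior = belief * likelihood
    return posterior / posterior.sum()

def predict(belief):
    """Motion update for moving one cell forward (wrap-around world, no motion noise)."""
    return np.roll(belief, 1)

# Drone flies over patterns "B", then "A", then "C"; the belief sharpens after a few updates.
for observed in ["B", "A", "C"]:
    belief = update(belief, observed)
    belief = predict(belief)
    print(observed, np.round(belief, 3))
```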