Home 4 - terrytaylorbonn/auxdrone GitHub Wiki
26.0209 Lab notes (Gdrive) Git
(PAGE REORG 26.0208) This wiki documents my work with AI (for a conceptual summary see Substack #66). About the author.
PHASE 4 (FUTURE 2027++)
-
ZAI 5 — GI (genuine intelligence). In the GPU-/CPU-based AI (simulated intelligence) world, terms like "neural network", "intelligence", "belief", "observation", etc. have vastly different meanings than in the biological world (true intelligence) (see my related Substack post #64). In the future world of GI (hosted on 3D electronic substrates), the meanings of such terms will closely align with those in the biological world.
-
ZAI 4 — ZAI Books. Bringing it all together in one conceptual whole (see Substack #66 for the basic idea). Books/blogs/videos summarizing my take on ZAI 1-3 AI (and ZAI 5 GI). To be distilled from the docx lab notes (Gdrive).
PHASE 3 (NOW 2026): ROBOTS. The focus is robotic intelligence (RI).
-
NOTE: RI is
- similar to AI: input -> computation -> output. There is no real intelligence.
- different in that ZAI 3a robotic sensors usually use explicit functions (but sometimes also small NNs).
-
ZAI 3b — Robotic control agent demos. Acting on the intel from the sensors, the control agent interacts with the external world via actuators.
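As a minimal sketch of what such a control agent does, the Python below wires a classic PID controller into a sense -> decide -> act loop (PID also appears in ZAI 1.3 below). The read_sensor/drive_actuator callables, gains, and timing are illustrative assumptions for this page, not code from the demos:

```python
# Hypothetical sense -> decide -> act loop around a PID controller.
# read_sensor/drive_actuator are placeholders; a real agent would talk
# MAVLink, ROS topics, or a motor driver instead.

class PID:
    """Classic PID: u = Kp*e + Ki*integral(e)dt + Kd*de/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def control_loop(read_sensor, drive_actuator, setpoint, dt=0.02, steps=500):
    """Agent loop: read the sensor's estimate, compute a correction, act."""
    pid = PID(kp=1.2, ki=0.1, kd=0.05)    # illustrative gains
    for _ in range(steps):
        measurement = read_sensor()        # e.g. estimated distance or angle
        command = pid.update(setpoint - measurement, dt)
        drive_actuator(command)            # e.g. motor/servo command
```

Whatever the transport (MAVLink, ROS, a bare motor driver), the loop shape stays the same: sensor estimate in, actuator command out.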
-
ZAI 3a — Robotic sensor function demos. Drones (ZAI 1) rely on forgiving physical dynamics (air, inertia) to absorb error, but close-range robots and self-driving systems operate in unforgiving environments where errors cannot be tolerated, making functionality like uncertainty-aware belief maintenance a core requirement. NOTE: These are usually functions (like the Kalman filter shown in the pic below), but sometimes also small NNs.
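To make "uncertainty-aware belief maintenance" concrete, here is a minimal 1-D Kalman filter sketch in which the belief is a Gaussian (a mean and a variance) over a sensed distance. The static-state model and the noise values are assumptions chosen for the sketch, not parameters from the demos:

```python
import random

# Minimal 1-D Kalman filter: the belief is a Gaussian (mean x, variance p)
# over e.g. a range-sensor distance. q and r are made-up noise variances.

def kalman_step(x, p, z, q=0.01, r=0.25):
    """One predict+update cycle for a (nearly) static state observed as z."""
    p = p + q                  # predict: uncertainty grows by process noise
    k = p / (p + r)            # Kalman gain: how much to trust the reading
    x = x + k * (z - x)        # update: blend prediction and measurement
    p = (1.0 - k) * p          # uncertainty shrinks after the update
    return x, p

x, p = 0.0, 1.0                # initial belief: mean 0, high uncertainty
true_distance = 2.0
for _ in range(50):
    z = true_distance + random.gauss(0.0, 0.5)    # noisy sensor reading
    x, p = kalman_step(x, p, z)
print(f"belief: mean={x:.3f}, variance={p:.4f}")  # mean converges near 2.0
```

Each update shrinks the variance, so the agent not only has an estimate but also knows how much to trust it.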
PHASE 2 (2024-2025): LLMs. The latest AI tools (LLMs like GPT) made MERN stack dev (ZAI 1.4) vastly more efficient. These tools were obviously the future, so I shifted my focus to LLMs.
-
NOTE: AI is:
- input -> computation -> output. There is no real intelligence.
- AI "sensors" (transformers) use UFAs (universal function approximators, NNs) because no pure function can describe the complex logic (see the sketch below).
-
ZAI 2b — LLM agents, which control the UFA sensors (TFs) and create the simulation of intelligence:
- 2b.2 LLMs (GPT-3) (2025). See 2.2c stacks, 2.2a deployments, 2.2b LLM APIs.
- 2b.3 Image/video "LLMs" (JEPA) (2026). My LLM studies (ZAI 2.2) made me a believer in LeCun's prediction that LLMs will soon "max out". The future focus will be "world models".
-
ZAI 2a — UFA sensors: CNNs / transformers (TFs):
- 2a.1 Image CNNs (vision AI used on drones). CNN study is good prep for TFs (2024).
- 2a.2 Language TFs (GPT-3) are the core token sequence generators inside LLMs (storyline recognition for "language models") (2025). See the sketch after this list.
- 2a.3 Video TFs (ViT) (2026).
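As a minimal sketch of a TF as a token sequence generator, the snippet below uses the Hugging Face transformers library with the small public GPT-2 checkpoint as a stand-in (the stacks and deployments actually used are covered in 2.2a-c). The prompt and sampling settings are arbitrary:

```python
# Stand-in demo of a TF as a token sequence generator, via the Hugging Face
# transformers library and the small public GPT-2 checkpoint. Prompt and
# sampling settings are arbitrary.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The drone climbed above the field and", return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=True,   # sample from the next-token distribution
    top_k=50,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Everything an LLM "says" comes out of this one loop: predict a distribution over the next token, pick one, append, repeat.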
PHASE 1 (2023-2024): DRONES and WEBSITE DEV
-
ZAI 1.4 — Tech (MERN) stacks. The drone (ZAI 1) market was saturated, working in Ukraine was not the best idea, and I was spending all my time on debugging open source SW and Chinese components. I shifted focus to refreshing my website dev skills. I focused mainly on MERN stacks.
-
ZAI 1 — AI Drones. The goal was to build AI (CNN object recognition) drones in Ukraine.
- 1.1 Drone simulation.
- 1.2 FPV and Pixhawk builds. EPIC 1 - Build/fly FPV drone and EPIC 2 - Build/fly Pixhawk drone.
- 1.3 Autonomous flight (Pixhawk + Jetson Nano + CNN object recognition on Nano/Pi). For details see EPIC 3 - Add AI to Pixhawk drone, EPIC 4 – Basic Autonomy, EPIC 5 – Advanced Autonomy, 10.3 SITL AP AI Yolo obj recog, 15.1 AI Kamikadze (Kalman, PID). A minimal object-recognition sketch follows this list.
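For flavor, here is a minimal sketch of the kind of CNN object recognition a Nano/Pi companion computer runs. It assumes the ultralytics YOLO package and the public yolov8n.pt weights as stand-ins; the EPIC 3-5 builds may use different models and camera plumbing:

```python
# Stand-in demo of CNN object recognition on a companion computer, using
# the ultralytics package and public yolov8n.pt weights; "frame.jpg" is a
# hypothetical camera frame.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")      # nano model, sized for edge hardware
results = model("frame.jpg")    # run detection on one image

for box in results[0].boxes:
    name = model.names[int(box.cls)]
    conf = float(box.conf)
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    print(f"{name} {conf:.2f} at ({x1:.0f},{y1:.0f})-({x2:.0f},{y2:.0f})")
```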
The "Aux" in "Auxdrone" is part of the name of an NGO I was working with that was active in Ukraine.