Robotic AI demos - terrytaylorbonn/auxdrone GitHub Wiki
26.0215 [email protected], linkedin.com/in/terry-taylor-biz, Lab notes (Gdrive), Git
The following describes in detail the current robotics AI docs on the Gdrive. The robotics AI demos started in Jan 2026; GPT determined the exact demos and the series content. Things were pretty chaotic at the beginning, and I have slowly started to reorg things. There are about 70 demos in total.
- #522 Explicit state
- #524 NN helps explicit state
- #524 Latent state replaces explicit
See also ZAI 2.3 TF vision (JEPA)
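My reading of the demo titles above (explicit state vs. latent state) can be sketched in a few lines of Python. All names here are illustrative, not taken from the demos: an explicit state has hand-named fields updated by hand-written rules, while a latent state is an opaque vector whose transition would be a trained NN (a linear map stands in for it here).

```python
# Explicit state: hand-named variables, hand-written update rule.
def step_explicit(s, dt=0.1):
    """Advance a named {pos, vel} state one timestep."""
    return {"pos": s["pos"] + s["vel"] * dt, "vel": s["vel"]}

# Latent state: an opaque vector z, updated by a learned transition.
def step_latent(z, dt=0.1):
    """Stand-in for a trained NN transition over a latent vector."""
    # In the real demos this map would be learned, not hand-coded.
    return [z[0] + z[1] * dt, z[1]]

explicit = step_explicit({"pos": 0.0, "vel": 1.0})
latent = step_latent([0.0, 1.0])
print(explicit["pos"], latent[0])
```

The point of the contrast: with explicit state a human names every variable, while with latent state the NN decides what to remember.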
The first chapter, "The goal of these demos", in docx #503_3_robotics_AI_JEPA_ describes the main concept: robotic AI is basically 2 things
- "traditional" AI such as LLMs, CNNs, ViTs. GPU-based.
- Algorithms (CPU-based) such as the Kalman filter that make it possible for binary-based robotic systems to interact with the real world gracefully. Early robotic systems relied on forgiving physical dynamics (air, inertia) to absorb error. Close-range robots and self-driving systems, by contrast, operate in unforgiving environments where errors cannot be tolerated, making uncertainty-aware belief maintenance a core requirement rather than an optimization.
These docs (Gdrive) are my main focus for now (26.0128): a series of hands-on robotic AI demos with GPT leading the way. The demos focus on the Python / NN aspects and avoid using any real physical robot parts for now (no camera, no arms, no wheels, etc.).