Robotic AI demos - terrytaylorbonn/auxdrone GitHub Wiki

26.0215 [email protected], linkedin.com/in/terry-taylor-biz, [Lab notes (Gdrive), Git


26.0215 Demos summary

The following describes in detail the current robotics AI docx files on the Gdrive. The robotics AI demos started in Jan 2026. GPT determined the exact demos and the series content. Things were pretty chaotic at the beginning, and I slowly started to reorganize. There are about 70 demos in total.


#510 docx (overview and planning)




#522, #524, #525 final set-3 (reorg'd again, but demo details content is still only in set-2)

  • #522 Explicit state
  • #524 NN helps explicit state
  • #525 Latent state replaces explicit state




Previous docx files

#500 - #503 set-1 of initial docs (chaotic)


#511 - #515 set-2 (reorg)



26.0213


See also ZAI 2.3 TF vision (JEPA)

OLD

The first chapter, "The goal of these demos," in docx #503_3_robotics_AI_JEPA_ describes the main concept: robotic AI is basically two things

  • "Traditional" AI such as LLMs, CNNs, and ViTs (GPU-based).
  • CPU-based algorithms, such as the Kalman filter, that make it possible for binary-based robotic systems to interact with the real world gracefully. Early robotic systems relied on forgiving physical dynamics (air, inertia) to absorb error, but close-range robots and self-driving systems operate in unforgiving environments where errors cannot be tolerated, making uncertainty-aware belief maintenance a core requirement rather than an optimization.
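The second point, uncertainty-aware belief maintenance, can be sketched with a minimal one-dimensional Kalman filter. This is an illustrative example, not code from the demos; the state, noise values, and measurements are made up.

```python
# Minimal 1-D Kalman filter sketch (hypothetical values, not from the demos).
# The "belief" is an estimate x plus a variance p that is explicitly tracked.
def kalman_step(x, p, z, q=0.01, r=0.5):
    """One predict/update cycle for a scalar state.
    x: prior estimate, p: prior variance,
    z: new measurement, q: process noise, r: measurement noise."""
    # Predict: state assumed constant, uncertainty grows by process noise.
    p_pred = p + q
    # Update: the Kalman gain weighs the measurement against the prediction.
    k = p_pred / (p_pred + r)
    x_new = x + k * (z - x)
    p_new = (1 - k) * p_pred
    return x_new, p_new

# Start with a vague belief (high variance), then fold in noisy measurements.
x, p = 0.0, 1.0
for z in [0.9, 1.1, 1.0, 0.95]:
    x, p = kalman_step(x, p, z)
print(x, p)  # estimate converges toward ~1.0 and variance shrinks
```

The key point for robotics: the filter never trusts a single reading outright; it keeps a running estimate of how uncertain it is, which is exactly the "graceful interaction" requirement described above.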

These docx files (Gdrive) are my main focus for now (26.0128): a series of hands-on robotic AI demos with GPT leading the way. The demos focus on the Python / NN aspects and avoid using any real physical robot parts for now (no camera, no arms, no wheels, etc.).


Old page
