3b‐2 Demo roadmap - terrytaylorbonn/auxdrone GitHub Wiki

26.0404 Lab notes (Gdrive) Git

PAGE MOVED TO

https://github.com/terrytaylorbonn/auxdrone/wiki/4%E2%80%902-Agentic-AI-demos


3b-2 AI app (agentic AI) demos

For demo details see #600-2_core_AI_concepts_. I did these demos with GPT; almost no time was spent writing/debugging Python code.

TOC

  • Types (3) of AI app demos
  • 1 Demos that clearly show the core AI app low-level mechanical concepts
    • 1.1 First basic AI app demo (D4b)
  • 2 Demos that build an AI app project from the ground up
    • 2.1 PAL v1 26.0327
    • 2.4 PAL v4 26.0328
      • 2.4b PAL v4 deployed on Render 26.0329 <<<<<<<<<<<<<<<<<<<<<
      • 2.4c PAL v5 with MongoDB 26.0331
  • 3a Demos of AI app frameworks
    • 3a.1 (FAIL) OpenWebUI + Ollama → fastest bridge from your current local stack
    • 3a.2 (TODO) Pydantic AI → closest to your “structured + deterministic” philosophy
    • 3a.3 (TODO) LangChain or LangGraph → maps almost directly to PAL v4 concepts
  • 3b Demos of AI app ("agentic AI") IDEs
    • 3b.1 (TODO) n8n

Types of AI app demos

  • 1 Demos that clearly show the core AI app low-level mechanical concepts.
    Pic below from video about agents https://youtu.be/mNhHjkf_ahM?t=468
    image
  • 2 Demos that build an AI app project from the ground up.
    Pic below about the Maven Smart System (MSS) is from https://youtu.be/Ng9IvQzCoHs?t=247
    image
  • 3a Demos of AI app frameworks (Pydantic AI, LangChain/LangGraph, OpenWebUI).
  • 3b Demos of agent workflow IDEs (n8n etc.)



1 Demos that clearly show the core AI app low-level mechanical concepts


1.1 First basic AI app demo (D4b)

  • 1.1.1 Substack post #71b
  • 1.1.2 Code and test results
  • 1.1.3 Code algorithm
  • 1.1.4 The key capabilities of the model that make this possible

1.1.1 Substack post #71b (26.0326)

https://ziptieai.substack.com/p/71b-a-very-basic-ai-application-demo describes my first 2026 agentic AI demo. This is a minimal demo (code on GitHub) that shows the core of what agentic AI really is (AI applications, not AI agents).

image

1.1.2 Code and test results (docx #600-2_core_AI_concepts_)

image

Search for GPT chats "#110" (code) and "#111" (results).

image

1.1.3 Code algorithm

Summarized:

  • #1 CREATE TASK (user_input): The LLM can correctly interpret user_input word combinations it has never seen before.
  • #2 CREATE STRUCTURED PLAN (make_valid_plan, build_initial_messages): The LLM can break a task into atomic actions according to a JSON spec.
  • #3 VALIDATE STRUCTURED PLAN (validate_plan): The predictable structure makes it possible to write deterministic code to validate the plan (if the reply format were not predictable, this would be impossible).
  • #4 REPAIR INVALIDATED PLAN (build_repair_messages): The model can actually fix its own mistakes.
  • #5 EXECUTE PLAN (execute_plan): The model calls tools (usually external APIs) to execute the plan (for dangerous actions, a human must first give the OK).

See #112 (in docx #600-2_core_AI_concepts_) for details.

image
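The 5-step loop above can be sketched as follows. This is a minimal illustration, not the actual demo code: the LLM call is a stub standing in for a real chat-completions request, and the JSON spec (each step has `action` and `args` keys) is an assumption; the function names mirror the ones in the notes.

```python
# Sketch of the CREATE -> PLAN -> VALIDATE -> REPAIR -> EXECUTE loop.
# call_llm_stub is a stand-in for the real LLM API call.
import json

PLAN_SPEC_KEYS = {"action", "args"}  # assumed JSON spec for each plan step


def call_llm_stub(messages):
    """Stand-in for the real LLM API call; returns a canned JSON plan."""
    return json.dumps([
        {"action": "fetch_events", "args": {"source": "db.json"}},
        {"action": "summarize", "args": {"field": "status"}},
    ])


def validate_plan(plan):
    """#3: deterministic validation, possible only because the structure is fixed."""
    errors = []
    for i, step in enumerate(plan):
        if set(step) != PLAN_SPEC_KEYS:
            errors.append(f"step {i}: keys {sorted(step)} != {sorted(PLAN_SPEC_KEYS)}")
    return errors


def make_valid_plan(user_input, max_repairs=2):
    """#1/#2: turn free-form user_input into a structured plan, repairing if needed."""
    messages = [{"role": "user", "content": user_input}]
    plan = json.loads(call_llm_stub(messages))
    for _ in range(max_repairs + 1):
        errors = validate_plan(plan)
        if not errors:
            return plan
        # #4: feed validation errors back so the model can fix its own output
        messages.append({"role": "user", "content": "Repair: " + "; ".join(errors)})
        plan = json.loads(call_llm_stub(messages))
    raise ValueError("plan never validated")


def execute_plan(plan, tools):
    """#5: dispatch each atomic action to a deterministic tool function."""
    return [tools[step["action"]](**step["args"]) for step in plan]


tools = {
    "fetch_events": lambda source: f"loaded {source}",
    "summarize": lambda field: f"summary by {field}",
}
plan = make_valid_plan("show me what happened, grouped by status")
results = execute_plan(plan, tools)
```

The key point the demo makes is in `validate_plan`: because the model is forced to reply in a fixed JSON shape, plain deterministic code can check (and trigger repair of) its output.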

1.1.4 The key capabilities of the model that make this possible

  • The model is a UFA (universal function approximator).
  • Universal means any function.
  • Approximation means it can make good guesses for inputs it was never trained on but that are semantically close to inputs it was trained on.
  • This requires a massive NN and extensive training input.
  • This simulates (to a certain (very useful) degree) reasoning, thinking, and intelligence.



2 Demos that build an AI app project from the ground up


2.1 PAL v1 26.0327

The following shows the code for the core call to the LLM API.

image image
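For reference, a core call to an LLM API typically has the shape below. This is a sketch, not the PAL v1 code: it targets the public OpenAI chat-completions HTTP endpoint using only the standard library, and the function names and model name are my own placeholders.

```python
# Minimal sketch of a chat-completions call (stdlib only, no SDK).
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"


def build_chat_request(user_input, model="gpt-4o-mini"):
    """Build (but do not send) the HTTP request for one chat-completions call."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_input}],
        "temperature": 0,  # keep structured replies as deterministic as possible
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": "Bearer " + os.environ.get("OPENAI_API_KEY", ""),
            "Content-Type": "application/json",
        },
    )


def call_llm(user_input):
    """Send the request and extract the assistant's reply text (needs an API key)."""
    with urllib.request.urlopen(build_chat_request(user_input)) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

Everything else in these demos is ordinary deterministic code wrapped around this one network call.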

The following shows test results.

image image


2.4 PAL v4 26.0328

Enter a free-form natural-language request to run a complex analysis of the DB (JSON file) event history.

GPT: "Clean one-line description for PAL v4: Use this: PAL v4 converts a natural-language analysis request into a multi-step plan, executes each step deterministically on stored data, and produces a structured comparison result."

image

GPT: Where this sits in your demo ladder. This is now:

  • D4b → single-step structured interaction
  • D4c → memory usage
  • PAL v1 → storage + analysis
  • PAL v2 → retrieval
  • PAL v3 → NL → filter
  • PAL v4 → NL → plan → multi-step reasoning

This is your first real “agent-like” system, even if you don’t like the word.
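The PAL v4 idea (NL → plan → multi-step deterministic execution on stored data) can be sketched like this. The plan is hard-coded here where the real system would get it from the LLM, and the event fields and operation names are invented for illustration.

```python
# Sketch: execute a multi-step plan deterministically over stored events.
events = [
    {"type": "flight", "duration": 12, "ok": True},
    {"type": "flight", "duration": 30, "ok": False},
    {"type": "charge", "duration": 45, "ok": True},
]

# Assumed plan format: each step names a deterministic operation and its args.
# In PAL v4 this plan would come from the LLM, built from a natural-language request.
plan = [
    {"op": "filter", "args": {"key": "type", "value": "flight"}},
    {"op": "compare", "args": {"key": "ok", "metric": "duration"}},
]


def run_plan(plan, data):
    """Run each plan step in order; every step is plain deterministic code."""
    for step in plan:
        op, args = step["op"], step["args"]
        if op == "filter":
            data = [e for e in data if e[args["key"]] == args["value"]]
        elif op == "compare":
            # Structured comparison result: average metric per group value.
            groups = {}
            for e in data:
                groups.setdefault(e[args["key"]], []).append(e[args["metric"]])
            data = {k: sum(v) / len(v) for k, v in groups.items()}
    return data


result = run_plan(plan, events)
```

The LLM only produces the plan; the analysis itself never depends on model output being correct at runtime, which is what makes the result reproducible.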


2.4b PAL v4 deployed on Render 26.0329

Search #600-2_core_AI_concepts_ for "PAL_v4 DEPLOYMENT (to RENDER)". Runs with ingest and inference. Need to add UI, DB, etc.

image
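A service that "runs with ingest and inference" like the Render deployment can be approximated with two POST endpoints. This is a stdlib-only sketch; the real app's framework, routes, and storage are not shown in the notes, so the endpoint names and in-memory store are assumptions.

```python
# Minimal ingest/inference HTTP service (stdlib only; illustrative routes).
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

STORE = []  # in-memory stand-in for the DB


class PalHandler(BaseHTTPRequestHandler):
    def _reply(self, obj, status=200):
        body = json.dumps(obj).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        if self.path == "/ingest":        # store one event
            STORE.append(payload)
            self._reply({"stored": len(STORE)})
        elif self.path == "/infer":       # run analysis over stored events
            self._reply({"count": len(STORE)})
        else:
            self._reply({"error": "not found"}, status=404)

    def log_message(self, *args):         # silence per-request logging
        pass


# To run standalone:
# HTTPServer(("0.0.0.0", 8000), PalHandler).serve_forever()
```

On a platform like Render the missing pieces the notes mention (UI, real DB) would replace `STORE` and sit in front of these routes.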

2.4c PAL v5 with MongoDB 26.0331

Search #600-3_core_AI_concepts_ for "pal_v5 (with mongo) deploy".

image image



3a Demos of AI app frameworks

3a.1 (FAIL) OpenWebUI + Ollama → fastest bridge from your current local stack.

26.0329: Got Ollama running locally in WSL2 on the SSD (Ollama on Win11 on my laptop can't recognize the NVIDIA GPU), but failed to get OpenWebUI (running in Docker on Win11) to connect to WSL2 (it only saw models in Win11). Gave up. For details, search for "No results found this reminds me of my experience with openwebui the last time.. utter chaos." in #600-2_core_AI_concepts_.

3a.2 (TODO) Pydantic AI → closest to your “structured + deterministic” philosophy

3a.3 (TODO) LangChain or LangGraph → maps almost directly to PAL v4 concepts



3b Demos of agent workflow IDEs (n8n etc.)

n8n is interesting: a great tool for automating tasks, but I don't like giving it access to everything.

It's not really my focus right now; I'm more interested in Palantir-style stuff.

#600-4 26.0402

image



