AI stacks - terrytaylorbonn/auxdrone GitHub Wiki

25.0612 (0418) · Doc URLs · Stack URLs · Lab notes (Gdrive) · Git


This part of the wiki has the following workflow:

  1. YouTube demos from my favorite bloggers (listed in the "Sources (authors)" chapter of docx #499_ai_stack_demo_master_list_) to keep up with the latest developments.
  2. Demo deployments (my focus is currently more on local models that run without an internet connection; see the smoke-test sketch after this list).
  3. "Hackathons" or "GPT sprints" where I use only GPT (Copilot) to build AI stacks.
  4. DEEP DIVE. Right now this is mostly empty, but the idea is to deep-dive (continuing from where the hackathons left off) into the five main areas shown in the diagrams below.
  5. AI stack documentation.
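
A hedged sketch of what step 2 looks like in practice: a smoke test against a local, offline model server. It assumes llama.cpp's `llama-server` is already running on localhost:8080 (its default port) with a model such as Mistral-7B loaded; the port, model filename, and prompt are illustrative assumptions, not part of any specific demo.

```python
# Minimal smoke test for a local-only LLM deployment (workflow step 2).
# Assumes llama.cpp's `llama-server` is running locally, e.g.:
#   ./llama-server -m mistral-7b-instruct.Q4_K_M.gguf --port 8080
# llama-server exposes an OpenAI-compatible /v1/chat/completions endpoint.
import json
import urllib.request

def ask_local_llm(prompt: str, base_url: str = "http://localhost:8080") -> str:
    """Send one chat turn to the local server; no internet connection needed."""
    payload = json.dumps({
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_local_llm("Reply with the single word: up"))
```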

OVERVIEW


DEMOS (YOUTUBE)

Getting initial hands-on experience by working through demos.


3 Hackathons (ChatGPT "sprints")

GPT sprints (see #405_openwebui_llama.cpp_mistral-7b-local_GPT_ as an example) are sessions where I use only GPT (Copilot) to test out various AI stack configurations (which I later explore in more detail in the next section, "4 DEEP DIVE").
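
A sketch of the kind of artifact a sprint like #405 produces, under assumptions about that stack: Open WebUI as the frontend (commonly mapped to port 3000) and llama.cpp's `llama-server` as the runtime (default port 8080, which serves a `/health` endpoint). Both ports are assumptions; adjust them to the actual deployment.

```python
# Hedged layer-by-layer health check for a stack like #405
# (Open WebUI -> llama.cpp -> Mistral-7B). Ports are assumptions.
import urllib.request

LAYERS = {
    "frontend (Open WebUI)": "http://localhost:3000",
    "runtime (llama.cpp llama-server)": "http://localhost:8080/health",
}

def check(name: str, url: str) -> None:
    """Report whether one layer of the stack answers over HTTP."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            print(f"OK   {name}: HTTP {resp.status}")
    except OSError as exc:  # urllib's URLError subclasses OSError
        print(f"FAIL {name}: {exc}")

if __name__ == "__main__":
    for name, url in LAYERS.items():
        check(name, url)
```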


4 DEEP DIVE 25.0610 (see diagrams below)

Use ChatGPT as much as possible to deep-dive into the five main functional areas:

  • 4a APIs. Used to connect the agent to the LLM. Explore selected framework APIs (go through the official docs for HuggingFace, Pydantic, OpenAI, Mistral, etc., using GPT to fill in the missing details).
  • 4b Agents. The main program loop (the agent binary); see the sketch after this list.
  • 4x Human input (??). Human in the loop: all the different methods for feeding in real (human) intelligence.
  • 4y External data input (??). MCP, RAG, A2A: getting external, up-to-date input.
  • 4c LLMs (1 model, 2 runtime, 3 frontend). The core of what makes AI "intelligent": the dumb tokenizer plus the nanny (or nanny-state) wrapper logic.
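
A minimal sketch tying 4a, 4b, and 4x together: a bare agent loop that calls an LLM through an OpenAI-compatible chat-completions API and gates every reply behind human approval. The endpoint and model behavior are assumptions (any local OpenAI-compatible server should work); error handling is omitted for brevity.

```python
# Hedged sketch of 4a/4b/4x together: an agent loop (4b) that talks to an
# LLM over an OpenAI-compatible API (4a) with a human in the loop (4x).
import json
import urllib.request

BASE_URL = "http://localhost:8080/v1"  # assumed local OpenAI-compatible server

def llm_call(messages: list[dict]) -> str:
    """4a: one round-trip through the chat-completions API."""
    payload = json.dumps({"messages": messages, "max_tokens": 256}).encode("utf-8")
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

def agent_loop() -> None:
    """4b: the main program loop, with 4x human approval gating each turn."""
    history: list[dict] = []
    while True:
        user = input("you> ").strip()
        if user in {"quit", "exit"}:
            break
        history.append({"role": "user", "content": user})
        draft = llm_call(history)
        # 4x: human in the loop -- accept the draft, or reject and retry.
        if input(f"llm> {draft}\naccept? [y/n] ").lower().startswith("y"):
            history.append({"role": "assistant", "content": draft})
        else:
            history.pop()  # drop the rejected turn so it can be re-asked

if __name__ == "__main__":
    agent_loop()
```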

Explore basic tech for creating documentation for an AI stack.
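
One hedged idea in that direction: generate the stack's inventory page from the machine itself instead of writing it by hand. The component list below is hypothetical; swap in whatever the stack actually installs.

```python
# Hedged sketch: auto-generate a markdown inventory of stack components.
from datetime import date
from importlib import metadata

COMPONENTS = ["requests", "pydantic"]  # hypothetical pip-installed stack pieces

def stack_doc() -> str:
    """Build a markdown table of component versions found on this machine."""
    lines = [
        f"# AI stack inventory ({date.today()})",
        "",
        "| component | version |",
        "|---|---|",
    ]
    for name in COMPONENTS:
        try:
            lines.append(f"| {name} | {metadata.version(name)} |")
        except metadata.PackageNotFoundError:
            lines.append(f"| {name} | not installed |")
    return "\n".join(lines)

if __name__ == "__main__":
    with open("STACK.md", "w") as fh:
        fh.write(stack_doc())
    print(stack_doc())
```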


DIAGRAMS

OVERVIEW

(diagram)

AGENT (4b) <> LLM API (4a)

(diagram)

DETAILED (WIP)

(diagram)


(Notes: extract content and eventually delete)

  • AI stack FE/BE compatibility 25.0416
