GPT CPLT dev concepts - terrytaylorbonn/auxdrone GitHub Wiki
25.0720 Lab notes (Gdrive), Git
See #405_openwebui_llama.cpp_mistral-7b-local_GPT_ as an example.
This wiki page describes the core concepts (from my perspective) of AI stack "hackathons" (or "sprints") using primarily ChatGPT/Copilot.
- Goals
- Workflow diagram
- Avoiding AI wild goose chases
- 0 to HERO demos, from scratch, using primarily GPT/Copilot.
- Explore agent topics like HD partitions, docker config, WIN11/WSL2, GPU config, etc.
The diagram below summarizes the workflows when you use ChatGPT, Copilot, etc. to create an example of an AI stack type. An example of each step follows the diagram.
If AI tools were truly intelligent, they would verify what they are telling you beforehand. But they don't (at least not always). For an example of a typical GPT wild goose chase, see #410.
If you look through the section shown above, you will notice a pattern.
- GPT gives specific directions
- Errors occur
- You tell GPT about the errors
- GPT searches for answers to the errors
- GPT cleverly packages the response in the style "yes, of course, that's right, you need to do this..."
- And the cycle repeats.
I say "cleverly" above because it takes a while to get a feeling for how the binary machine (GPT) is programmed to take advantage of normal human interaction habits. Rather than saying "My sources of info were incomplete, and I did not verify that they are correct; thank you for testing this out, and here is my best guess for what to do next based on what info I could scrape from human writers on the internet", GPT will reply "Yes, of course, you are right...". In the example from #410, it turns out that the configuration I wanted to create is not possible. GPT informed me of this conclusion only after half a day spent in a rabbit hole.
Section 5 AI stack docs (DRAFT) talks about the details. Section 5 focuses on AI docs for AI stacks, but in the future the doc focus will be AI-based docs for all types of products.