# AI Workflow
## Workflow Definition
Code Tutor implements an LLM workflow, as defined by Anthropic here. Unlike agents, which make autonomous decisions, a workflow is simpler: it chains different LLM calls along predefined paths. We use three types of workflows: prompt chaining, routing, and evaluator-optimizer, all implemented using the LangChain library.
In the routing workflow, a level-assessor LLM decides which of the five helper LLMs answers the user. This separation of concerns makes the prompts easier to test and modify, and lets us use smaller models for simpler answers or give the helpers different tools.
We also use the evaluator-optimizer workflow: one LLM call (the tutor) generates a response, while another LLM evaluates whether that response is suitable for the current help level and gives feedback if it is not. This improves response quality, but above all it mitigates jailbreaking, because the evaluator is never shown the user prompt.
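The evaluator-optimizer loop can be sketched as follows. This is a minimal TypeScript sketch with stubbed "LLM" calls; the function names and messages are illustrative assumptions, not code-tutor's actual implementation.

```typescript
type Evaluation = { pass: boolean; feedback: string };

// Stub tutor: on a retry, it incorporates the evaluator's feedback.
function generateResponse(task: string, feedback?: string): string {
  return feedback ? `hint only: think about ${task}` : `full solution for ${task}`;
}

// Stub evaluator: it sees only the generated response, never the user
// prompt, so a jailbreak attempt in the user prompt cannot reach it.
function evaluateResponse(response: string, level: string): Evaluation {
  const pass = level !== "contextual" || !response.startsWith("full solution");
  return { pass, feedback: pass ? "" : "Too much help: give a nudge, not a solution." };
}

// Generate, evaluate, and retry with stricter prompting until the
// response conforms or the round budget is exhausted.
function evaluatorOptimizer(task: string, level: string, maxRounds = 3): string {
  let feedback: string | undefined;
  for (let i = 0; i < maxRounds; i++) {
    const response = generateResponse(task, feedback);
    const result = evaluateResponse(response, level);
    if (result.pass) return response;
    feedback = result.feedback;
  }
  return "Sorry, I can't help with that right now.";
}
```

Because the evaluator's verdict feeds back into the next generation attempt, the loop tightens the response instead of simply rejecting it.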
## Code Tutor Workflow
1. **User Query**: The system receives the user input along with any relevant code, exercise, or conversational context.
2. **Level Decision**: Based on the query, the current assistance level, and the conversation context, the system selects one of the five defined help levels.
3. **Delegation to Agent**: The selected agent processes the query using prompt guidelines tailored to the designated level.
4. **Conformance Check**: A dedicated agent reviews the generated response to verify that it complies with the minimal-help principle for the chosen level. This agent deliberately never receives the user prompt. If the response fails the check, the process returns to Step 3 with stricter prompting and a suggested revision.
5. **Summarization**: If the chat history has grown too long, a summarizer LLM condenses it.
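The five steps above can be wired together as in the following sketch. All LLM calls are replaced with deterministic stubs, and every name here is an illustrative assumption rather than code-tutor's real API.

```typescript
type Level = "unrelated" | "motivational" | "contextual";
interface Turn { role: "user" | "tutor"; text: string }

// Step 2: level decision (stub for the level-assessor LLM).
function decideLevel(query: string): Level {
  if (/stuck|frustrated/i.test(query)) return "motivational";
  if (!/code|loop|bug|test/i.test(query)) return "unrelated";
  return "contextual";
}

// Step 3: delegation to the level-specific tutor (stub).
function delegate(level: Level, query: string, strict: boolean): string {
  if (level === "motivational") return "You're close, keep going!";
  if (level === "unrelated") return "Let's focus on the exercise.";
  return strict ? "Nudge: re-check your loop bounds." : `Solution: ${query}`;
}

// Step 4: conformance check. Note it receives only the draft answer
// and the level, never the user query.
function conforms(answer: string, level: Level): boolean {
  return level !== "contextual" || !answer.startsWith("Solution:");
}

// Step 5: summarize once the chat history exceeds a length limit.
function maybeSummarize(history: Turn[], limit = 20): Turn[] {
  if (history.length <= limit) return history;
  const summary: Turn = { role: "tutor", text: `Summary of ${history.length - limit} earlier turns.` };
  return [summary, ...history.slice(-limit)];
}

// Step 1 plus orchestration of the whole pipeline.
function handleQuery(query: string, history: Turn[]): { answer: string; history: Turn[] } {
  const level = decideLevel(query);
  let answer = delegate(level, query, false);
  if (!conforms(answer, level)) {
    answer = delegate(level, query, true); // retry with stricter prompting
  }
  const updated = maybeSummarize([
    ...history,
    { role: "user", text: query },
    { role: "tutor", text: answer },
  ]);
  return { answer, history: updated };
}
```

A query like "my loop has a bug" is routed to the contextual tutor, whose first (too helpful) draft fails the conformance check and is regenerated as a nudge.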
## Tutor LLMs
- Unrelated — The user asks questions not related to programming.
- Motivational Help — Encouragement without technical content.
- Feedback Help — Validating or correcting methods without new hints.
- General Strategic Help — Broad guidance on strategies and approaches.
- Content-Oriented Strategic Help — General hints about common solutions or methods.
- Contextual Help — Problem-specific nudges without full solutions.
- Finished — All test cases pass; the user receives feedback on their solution and is congratulated.
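Routing a query to one of these tutors amounts to a lookup from the assessed level to a level-specific system prompt. The sketch below uses placeholder prompt strings; the real prompts live in the file linked below.

```typescript
// Placeholder system prompts, one per help level (illustrative only).
const tutorPrompts: Record<string, string> = {
  unrelated: "Politely steer the student back to the programming exercise.",
  motivational: "Encourage the student; give no technical content.",
  feedback: "Validate or correct the student's approach; add no new hints.",
  strategic: "Give broad guidance on problem-solving strategies.",
  content: "Hint at common solution patterns or methods in general terms.",
  contextual: "Give a problem-specific nudge, never the full solution.",
  finished: "All tests pass: congratulate and review the solution.",
};

// Fall back to the "unrelated" tutor if the assessor returns an
// unknown label, so the system always answers within some role.
function systemPromptFor(level: string): string {
  return tutorPrompts[level] ?? tutorPrompts.unrelated;
}
```

Keeping the prompts in one table is what makes each tutor independently testable and swappable, as described in the routing section above.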
Find the exact prompts in this file.
## Prompt Engineering
Based on prompt-engineering best practices, the prompts use the following techniques:
- They are clear and direct
- They use XML tags / Markdown formatting
- They set system prompts (e.g. "You are a seasoned Coding Tutor that...")
- They chain complex prompts (as in our workflow)
- They use examples (multishot/few-shot prompting)
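Several of these techniques combine in a single prompt: a system role, XML-tagged context, and few-shot examples. The sketch below shows one way to assemble such a prompt; the tag names and example text are assumptions for illustration, not the tags code-tutor actually uses.

```typescript
interface Example { student: string; tutor: string }

// Compose a system role, few-shot examples wrapped in XML tags,
// and the current query into one prompt string.
function buildPrompt(role: string, examples: Example[], query: string): string {
  const shots = examples
    .map(e => `<example>\n<student>${e.student}</student>\n<tutor>${e.tutor}</tutor>\n</example>`)
    .join("\n");
  return `${role}\n<examples>\n${shots}\n</examples>\n<query>${query}</query>`;
}

const prompt = buildPrompt(
  "You are a seasoned Coding Tutor that never reveals full solutions.",
  [{ student: "My loop never ends.", tutor: "What is your loop condition doing on the last iteration?" }],
  "Why does my recursion overflow?"
);
```

The XML tags let the model distinguish the examples from the live query, and the few-shot pair demonstrates the Socratic tone the tutor should imitate.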
Based on the principle of minimal help, the tutors are instructed to adhere to these rules:
- Clear role setting: Each LLM call operates within a strict, level-specific tutoring role.
- Socratic questioning: AI encourages reflection instead of giving answers.
- Minimal guidance: Even at the highest help level, the AI only nudges rather than solving.
- Open-ended interaction: Suggestions are framed as options, not commands.
Resources on how to write effective prompts from OpenAI and Anthropic.