Project Planner - JoelBondoux/AtlasMind GitHub Wiki
The /project command decomposes a high-level goal into a DAG of subtasks and executes them autonomously.
Overview
@atlas /project Refactor the auth module to use JWT tokens
Flow:
- Planning — LLM generates a ProjectPlan with subtasks, dependencies, and roles
- Preview — Estimated file impact is shown; approval gated if above threshold
- Execution — TaskScheduler runs subtasks in topological batches
- Synthesis — Final report aggregates results across all subtasks
- Persistence — Run saved to Project Run History for review
Planning Phase
The Planner sends the goal + workspace context to the LLM, which returns a ProjectPlan:
interface ProjectPlan {
goal: string;
subtasks: SubTask[];
}
interface SubTask {
id: string;
title: string;
description: string;
role: string; // e.g. "architect", "tester", "backend-engineer"
skills: string[]; // required skill IDs
dependencies: string[]; // IDs of subtasks that must complete first
}
Constraints
- Maximum 30 subtasks per plan
- Cycle detection via Kahn's algorithm — cyclic edges are removed
- Each subtask gets a role that maps to an ephemeral agent (see Agents)
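The cycle-removal constraint can be sketched with Kahn's algorithm: repeatedly peel off subtasks whose dependencies are already satisfied, and any subtask left over must sit on a cycle, so its unresolved edges are dropped. The `removeCycles` helper below is illustrative, not the actual AtlasMind API:

```typescript
interface SubTask {
  id: string;
  dependencies: string[];
}

const MAX_SUBTASKS = 30;

// Kahn's algorithm sketch: order subtasks whose dependencies are met;
// whatever remains participates in a cycle, and its cyclic edges are removed.
function removeCycles(subtasks: SubTask[]): SubTask[] {
  if (subtasks.length > MAX_SUBTASKS) {
    throw new Error(`Plan exceeds ${MAX_SUBTASKS} subtasks`);
  }
  const done = new Set<string>();
  const pending = [...subtasks];
  const ordered: SubTask[] = [];
  let moved = true;
  while (moved) {
    moved = false;
    for (let i = pending.length - 1; i >= 0; i--) {
      const t = pending[i];
      if (t.dependencies.every((d) => done.has(d))) {
        done.add(t.id);
        ordered.push(t);
        pending.splice(i, 1);
        moved = true;
      }
    }
  }
  // Leftover subtasks are cyclic: keep only edges to already-ordered tasks.
  for (const t of pending) {
    t.dependencies = t.dependencies.filter((d) => done.has(d));
    ordered.push(t);
  }
  return ordered;
}
```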
Preview & Approval
Before execution, AtlasMind estimates the impact:
estimatedFiles = subtaskCount × projectEstimatedFilesPerSubtask
If estimatedFiles >= projectApprovalFileThreshold (default: 12), the user must approve before execution proceeds.
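The gate above reduces to one multiplication and a comparison. A minimal sketch, where the `needsApproval` helper is hypothetical and the defaults mirror the Configuration table:

```typescript
// Illustrative helper: returns true when the estimated file impact
// crosses the approval threshold.
function needsApproval(
  subtaskCount: number,
  filesPerSubtask = 2, // atlasmind.projectEstimatedFilesPerSubtask
  threshold = 12,      // atlasmind.projectApprovalFileThreshold
): boolean {
  const estimatedFiles = subtaskCount * filesPerSubtask;
  return estimatedFiles >= threshold;
}
```

So a 6-subtask plan (6 × 2 = 12 estimated files) already requires approval at the defaults, while a 5-subtask plan does not.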
The preview shows:
- Total subtask count
- Estimated files touched
- Dependency graph (visual DAG)
- Per-subtask: title, role, skills, dependencies
Execution Phase
TaskScheduler
The TaskScheduler takes the dependency DAG and:
- Performs topological sort (Kahn's algorithm) to determine execution order
- Groups independent subtasks into parallel batches
- Executes each batch with up to 5 concurrent subtasks
- Each subtask runs through the orchestrator with an ephemeral agent
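The batching step can be sketched as repeated frontier extraction: every subtask whose dependencies are all finished joins the next batch. The `planBatches` name is hypothetical; the real scheduler additionally caps each batch at 5 concurrent subtasks:

```typescript
interface SubTask {
  id: string;
  dependencies: string[];
}

// Group subtasks into batches: each batch contains every remaining subtask
// whose dependencies have already completed, so batches run in sequence
// while their members can run in parallel.
function planBatches(subtasks: SubTask[]): SubTask[][] {
  const finished = new Set<string>();
  const remaining = new Set(subtasks);
  const batches: SubTask[][] = [];
  while (remaining.size > 0) {
    const batch = [...remaining].filter((t) =>
      t.dependencies.every((d) => finished.has(d)),
    );
    if (batch.length === 0) throw new Error("cycle detected in plan");
    for (const t of batch) {
      finished.add(t.id);
      remaining.delete(t);
    }
    batches.push(batch);
  }
  return batches;
}
```

Each returned batch could then be executed with `Promise.all`, sliced into chunks of at most 5 to honor the concurrency limit.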
Ephemeral Agents
Each subtask spawns a temporary agent with a role-specific system prompt:
| Role | Focus |
|---|---|
| architect | System design, patterns, scalability |
| backend-engineer | APIs, data layers, performance |
| frontend-engineer | UI components, accessibility |
| tester | Test authoring, coverage, edge cases |
| documentation-writer | Docs, clarity, completeness |
| devops | CI/CD, infrastructure, deployment |
| data-engineer | Data models, pipelines |
| security-reviewer | OWASP, threats, mitigations |
| general-assistant | Fallback |
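One way to picture the role-to-prompt mapping (the actual prompt text and lookup live inside AtlasMind; everything here is illustrative):

```typescript
// Hypothetical role-to-system-prompt table for ephemeral agents.
const ROLE_PROMPTS: Record<string, string> = {
  "architect": "Focus on system design, patterns, and scalability.",
  "backend-engineer": "Focus on APIs, data layers, and performance.",
  "tester": "Focus on test authoring, coverage, and edge cases.",
};

// Unknown roles fall back to the general assistant, per the table above.
function systemPromptFor(role: string): string {
  return ROLE_PROMPTS[role] ?? "You are a general-purpose assistant.";
}
```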
Model Selection for Parallel Execution
The model router's selectModelsForParallel() allocates models across concurrent slots:
- Subscription/free models fill the first slot
- Pay-per-token models absorb overflow
- Cost is balanced across the batch
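One plausible reading of this allocation, assuming each candidate model carries a billing tag; the real `selectModelsForParallel` signature is internal and may differ:

```typescript
interface ModelChoice {
  id: string;
  billing: "subscription" | "per-token";
}

// Fill the first slots with flat-rate (subscription/free) models, then let
// pay-per-token models absorb the overflow, cycling so no single metered
// model carries the whole batch.
function allocateSlots(models: ModelChoice[], slots: number): ModelChoice[] {
  const flat = models.filter((m) => m.billing === "subscription");
  const metered = models.filter((m) => m.billing === "per-token");
  return Array.from({ length: slots }, (_, i) => {
    if (i < flat.length) return flat[i];
    const overflow = i - flat.length;
    // If there are no metered models, wrap around the flat-rate pool.
    return metered[overflow % metered.length] ?? flat[i % flat.length];
  });
}
```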
Checkpoints
Before each write operation during execution:
- CheckpointManager captures file snapshots
- If a subtask fails, files can be rolled back to the pre-subtask state
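A minimal sketch of that capture-then-rollback behavior; CheckpointManager's real interface is internal to AtlasMind, so the method names here are assumptions:

```typescript
// Illustrative checkpoint store: remembers each file's pre-subtask
// contents and can hand them back on failure.
class CheckpointManager {
  private snapshots = new Map<string, string>();

  // Called before each write; only the first capture per file is kept,
  // so rollback always restores the pre-subtask state.
  capture(path: string, contents: string): void {
    if (!this.snapshots.has(path)) this.snapshots.set(path, contents);
  }

  // On subtask failure: return every captured file's original contents
  // and clear the store for the next subtask.
  rollback(): Map<string, string> {
    const restored = new Map(this.snapshots);
    this.snapshots.clear();
    return restored;
  }
}
```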
Synthesis Phase
After all subtasks complete, the orchestrator:
- Collects results from each subtask
- Sends them to the LLM for a unified synthesis report
- Reports total cost, files changed, and any failures
- Surfaces up to projectChangedFileReferenceLimit (default: 5) clickable file references
Run History
Completed runs are saved to the Project Run History:
- Location: project_memory/operations/ (configurable via projectRunReportFolder)
- Format: JSON with goal, plan, results, timing, and cost breakdown
- Access: /runs command or AtlasMind: Open Project Run Center
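A hypothetical shape for a saved run report, inferred from the fields this page says are persisted; the real JSON schema may differ:

```typescript
// Illustrative report structure: goal, plan results, timing, and cost.
interface ProjectRunReport {
  goal: string;
  status: "completed" | "failed" | "partial";
  timestamp: string; // ISO 8601
  subtasks: { id: string; title: string; status: string }[];
  timing: { startedAt: string; finishedAt: string };
  cost: { totalUsd: number; tokens: { input: number; output: number } };
}

// Example report as it might be written to project_memory/operations/.
const sampleReport: ProjectRunReport = {
  goal: "Refactor the auth module to use JWT tokens",
  status: "completed",
  timestamp: "2024-01-01T12:00:00Z",
  subtasks: [{ id: "t1", title: "Design token flow", status: "completed" }],
  timing: { startedAt: "2024-01-01T12:00:00Z", finishedAt: "2024-01-01T12:10:00Z" },
  cost: { totalUsd: 0.42, tokens: { input: 12000, output: 4000 } },
};
```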
The Run Center webview shows:
- Run status (completed, failed, partial)
- Goal and timestamp
- Subtask breakdown with per-task status
- Total cost and token usage
- Options to re-run or inspect details
Configuration
| Setting | Default | Description |
|---|---|---|
| atlasmind.projectApprovalFileThreshold | 12 | Estimated changed-file count that triggers approval |
| atlasmind.projectEstimatedFilesPerSubtask | 2 | Heuristic multiplier for file impact estimation |
| atlasmind.projectChangedFileReferenceLimit | 5 | Max clickable file references in the summary |
| atlasmind.projectRunReportFolder | project_memory/operations | Where run reports are saved |
| atlasmind.toolApprovalMode | ask-on-write | Controls approval gating during execution |
Tips
- Start small — test with a focused goal before running large refactors
- Review the preview — check the dependency graph makes sense before approving
- Use /runs — review past runs to learn what works and refine your prompts
- Memory helps — the more SSOT context you have, the better the planner understands your codebase