Skills - Z-M-Huang/openhive GitHub Wiki
# Skills
Skills are modular, reusable procedures. They define HOW to perform specific tasks, separated from the agent identity that uses them.
This page covers skill files in detail. For the overall rule system, see Rules-Architecture. For agent identity definitions, see Subagents.
## Skills vs Subagent Definitions

| Aspect | Subagent Definition (`subagents/*.md`) | Skill (`skills/*.md`) |
|---|---|---|
| Defines | WHO -- identity, personality, perspective | HOW -- step-by-step procedure |
| Contains | Role description, skill references, behavioral traits | Ordered steps, tool usage patterns, decision criteria |
| Reusability | One file per agent | One skill usable by many agents |
| Loaded by | Converted to AI SDK `tool()` definition at session creation | Discovered from disk continuously; the activated skill's markdown is loaded into the system prompt on demand |
## Skill File Layout

Skill files live in `.run/teams/{name}/skills/`, one markdown file per skill (e.g., `deploy.md`, `code-review.md`, `incident-response.md`).
## Who Can Use Skills

Subagents follow skills. The orchestrator delegates to a subagent, which loads the appropriate skill into its context. Orchestrators never invoke skills directly (ADR-40).

A single `deploy.md` skill can be referenced by multiple subagents — e.g., both the `devops` and `release-manager` subagents can follow the same deploy skill.
## Skill Definition Format

Each skill file starts with a heading `# Skill: {name}` and contains these sections:
| Section | Purpose |
|---|---|
| Required Tools | Plugin tools the skill needs (drives plugin loading) |
| Purpose | When to use this skill |
| Steps | Ordered, specific, actionable instructions |
| Inputs | What the skill needs to start |
| Outputs | What the skill produces |
| Error Handling | Recovery actions for failure conditions |
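Putting these sections together, a minimal skill file might look like the sketch below. The `deploy_service` and `health_check` tool names are illustrative placeholders, not shipped tools; the section contents paraphrase the deploy example described later on this page.

```markdown
# Skill: deploy

## Required Tools
deploy_service — triggers a deployment via the CI API
health_check — polls the service health endpoint

## Purpose
Use when a reviewed, tested change needs to be released to production.

## Steps
1. Verify prerequisites: tests passing, PR approved, no active deploy.
2. Call deploy_service for the target environment.
3. Verify the deployment via health_check.
4. Log the deployment via memory and notify the parent team.

## Inputs
Branch or release tag to deploy.

## Outputs
Deployment record; notification to the parent team.

## Error Handling
If health checks fail, roll back to the previous release and escalate.
```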
The `## Required Tools` section declares which plugin tools a skill needs. Only tools listed here are candidates for loading into the session's `activeTools`.

Final exposure is the intersection of:

- The skill's `## Required Tools` entries
- The team's `allowed_tools` rules, matched against the namespaced runtime key `{team_name}.{tool_name}`

Generic tasks (no active skill) still receive the normal non-plugin tool set permitted by `allowed_tools`; they simply omit plugin tools.
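A minimal sketch of this intersection, assuming illustrative function names (not OpenHive's internals):

```typescript
// Sketch: final tool exposure = skill's Required Tools ∩ team allowlist.
// Patterns may be exact runtime keys or a "{team}.*" wildcard.
function matchesAllowlist(runtimeKey: string, allowedTools: string[]): boolean {
  return allowedTools.some((pattern) =>
    pattern.endsWith(".*")
      ? runtimeKey.startsWith(pattern.slice(0, -1)) // "engineering.*" keeps the trailing dot
      : runtimeKey === pattern
  );
}

function activePluginTools(
  teamName: string,
  requiredTools: string[], // from the skill's ## Required Tools section
  allowedTools: string[]   // from the team's allowed_tools rules
): string[] {
  return requiredTools
    .map((tool) => `${teamName}.${tool}`) // namespace into the runtime key
    .filter((key) => matchesAllowlist(key, allowedTools));
}
```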
Ad-hoc skill and tool loading:

- Skills are loaded ad-hoc; inactive skills stay on disk and are not appended to the prompt
- When a skill is activated, its `## Required Tools` section drives plugin tool loading
- Plugin source files exist on disk regardless of skill references; lifecycle state and verification results persist separately in the SQLite `plugin_tools` table
- Deprecated or failed-verification tools remain on disk but are excluded from `activeTools`
Tool declaration format: The Required Tools section lists one tool per line: `{tool_name} — {description}`. Each tool name maps to a TypeScript file at `.run/teams/{name}/plugins/{tool_name}.ts` and a row in the SQLite `plugin_tools` table.
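For example, a hypothetical declaration block (tool names borrowed from the examples later on this page):

```markdown
## Required Tools
fetch_logs — queries the log provider API for recent entries
classify_entries — flags anomalous entries by severity
```

Here `fetch_logs` would map to `.run/teams/{name}/plugins/fetch_logs.ts` and a matching `plugin_tools` row, and likewise for `classify_entries`.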
Load behavior:

- Skills without `## Required Tools` use built-in tools only
- Tools are loaded lazily when a skill is activated
- Tool names are namespaced: `{team_name}.{tool_name}`
- `allowed_tools` must allow the namespaced runtime key (`engineering.deploy_service` or `engineering.*`); bare `deploy_service` does not match
- Plugin files persist on disk; they're excluded from `activeTools` if no active skill declares them, if the allowlist does not match, or if the persisted metadata marks them deprecated/failed
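The exclusion rules can be sketched as a single predicate. The `PluginRecord` shape and its field names are assumptions for illustration, not the actual schema:

```typescript
// Assumed shape mirroring the persisted plugin metadata conceptually.
interface PluginRecord {
  name: string;
  status: "active" | "deprecated" | "failed_verification";
}

function isExposed(
  record: PluginRecord,
  declaredByActiveSkill: boolean, // listed in an active skill's Required Tools
  allowedByRules: boolean         // namespaced key matched by allowed_tools
): boolean {
  // The source file stays on disk either way; this only decides
  // membership in the session's activeTools.
  return declaredByActiveSkill && allowedByRules && record.status === "active";
}
```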
Skill files remain in `.run/teams/{name}/skills/`. When a skill is activated, the loader appends that skill's markdown after the rule cascade; inactive skills are not injected into the prompt.
## Example Skills
Deploy skill — A deployment procedure that verifies prerequisites (tests passing, PR approved, no active deploys), executes the build and deploy steps in order, verifies the deployment via health checks, and includes rollback instructions if verification fails. Post-deploy, it logs the deployment via memory and notifies the parent team.
Code review skill — A pull request review procedure covering correctness, edge cases, test coverage, security, and style adherence. Distinguishes between blocking issues and suggestions. Outcomes are approve, request changes (with specific blockers listed), or escalate (for out-of-scope changes).
## System Skills

System-level skills ship in `system-rules/skills/` and are baked into the Docker image alongside system rules. They provide baseline procedures that every team can use without creating team-local copies.

Current system skills:

- `learning-cycle.md` — autonomous learning procedure (nightly discovery cycle)
- `reflection-cycle.md` — self-reflection procedure (evidence-based self-assessment)
## Skill Resolution Order

When a skill is activated, the loader resolves it using a two-step fallback:

1. Team path — `.run/teams/{name}/skills/{skill}.md`
2. System path — `system-rules/skills/{skill}.md`
3. If neither exists, the skill is not loaded (returns `null`)
A team can override a system skill by creating a file with the same name in its own `skills/` directory. The team-level copy takes precedence.
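A sketch of the fallback, assuming a Node-style loader; the real function name and signature may differ:

```typescript
import { existsSync } from "node:fs";

// Two-step fallback: team copy wins, then the baked-in system copy,
// then null (skill is not loaded).
function resolveSkillPath(teamName: string, skill: string): string | null {
  const teamPath = `.run/teams/${teamName}/skills/${skill}.md`;
  if (existsSync(teamPath)) return teamPath;
  const systemPath = `system-rules/skills/${skill}.md`;
  if (existsSync(systemPath)) return systemPath;
  return null;
}
```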
## Initial Skill Creation (Bootstrap)

When a team is first spawned, system skills are already available via the fallback resolution described above. The team can also create its own skills during the bootstrap process:

1. The auto-queued bootstrap task instructs the team to read its init context and credentials
2. Based on its purpose and scope, the team creates plugin tools, skill files, and subagent definitions:
   - `plugins/loggly_fetch.ts` — plugin tool for querying the Loggly API
   - `plugins/classify_entries.ts` — plugin tool for anomaly detection
   - `skills/get-loggly-log.md` — skill wiring the `loggly_fetch` plugin
   - `skills/alert-check.md` — skill wiring both plugins for monitoring
   - `subagents/loggly-monitor.md` — subagent that follows the above skills
3. Skills reference plugins via `## Required Tools`. Subagents reference skills in their `## Skills` section.
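As an illustration, the `subagents/loggly-monitor.md` file from this example might wire its skills like so (the `## Role` text is a hypothetical placeholder; only the `## Skills` section is specified on this page):

```markdown
# Subagent: loggly-monitor

## Role
Monitors Loggly for anomalous log entries and escalates critical findings.

## Skills
- get-loggly-log
- alert-check
```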
During bootstrap, teams can also use `search_skill_repository` to search the Vercel skills ecosystem for relevant patterns, using existing skills as starting points rather than generating everything from scratch. See Skill-Repository.
## Skill + Plugin Workflow

Skills and plugin tools serve complementary roles:

| Layer | Role | Examples |
|---|---|---|
| Plugin tools | Executable logic — API calls, data parsing, transformations | `fetch_logs`, `classify_entries`, `parse_metrics` |
| Skills | Orchestration — wire plugins together, interpret output, decide next actions | `monitoring-check.md`, `incident-response.md` |
Plugin tools do the mechanical work. Skills provide the judgment and sequencing that ties the mechanical work into a coherent procedure.
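For instance, a hypothetical `fetch_logs` plugin file could contain nothing but mechanical logic. The endpoint URL and function shape here are assumptions for illustration, not OpenHive's actual plugin contract:

```typescript
// Hypothetical plugin file: .run/teams/engineering/plugins/fetch_logs.ts
// Mechanical work only: build the request, call the API, return structured data.
function buildLogQueryUrl(query: string, sinceMinutes: number): string {
  return `https://logs.example.com/search?q=${encodeURIComponent(query)}&since=${sinceMinutes}m`;
}

async function fetch_logs(params: {
  query: string;
  sinceMinutes: number;
}): Promise<{ entries: { level: string; message: string }[] }> {
  const res = await fetch(buildLogQueryUrl(params.query, params.sinceMinutes));
  if (!res.ok) throw new Error(`log fetch failed: ${res.status}`);
  return res.json();
}
```

The judgment (what counts as anomalous, when to escalate) stays in the skill that declares this tool.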
## 4-Step Creation Workflow

Plugin-first invariant (ADR-39): Every external operation must be a registered plugin tool before a skill can reference it. Skills are orchestration only.

1. Search skills.sh — call `search_skill_repository` to find existing skills matching the desired capability. Use matches >=60% as starting points.
2. Extract/create plugins — identify the executable operations the skill needs (API calls, parsers, classifiers). Register each as a plugin tool via `register_plugin_tool({ tool_name, source_code })`.
3. Create skill — write the skill markdown to `skills/{name}.md`, listing the plugin tools in `## Required Tools` and defining the orchestration steps.
4. Wire to subagent — add the skill to the target subagent's `## Skills` section in `subagents/{name}.md`.
## Example: Monitoring Skill
A monitoring skill declares its required plugin tools (e.g., log fetcher and entry classifier), then orchestrates them in sequence — fetching logs, checking for the empty case (early exit), classifying entries, escalating critical findings, and saving results to memory. The skill provides the judgment and sequencing; the plugins handle the mechanical API calls.
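Such a skill might read as follows. This is a sketch reusing the `fetch_logs` and `classify_entries` names from the workflow table above; the step wording is illustrative:

```markdown
# Skill: monitoring-check

## Required Tools
fetch_logs — queries the log provider for recent entries
classify_entries — flags anomalous entries by severity

## Steps
1. Call fetch_logs for the current monitoring window.
2. If zero warnings or errors are returned, report a clean status and stop (early exit).
3. Call classify_entries on the returned entries.
4. Escalate critical findings to the parent team.
5. Save results to memory.
```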
## Skill Evolution
Skills evolve independently from agent definitions. Changing a skill's steps does not require modifying any subagent that references it. This separation means:
- A single skill improvement benefits all agents that use it
- New agents can be created by combining existing skills
- Skills can be versioned and reviewed through the same governance process as other rules
## Skill Import Flow

When creating skills, agents search the Vercel skills ecosystem (skills.sh) for existing matches. Skills scoring ≥60% are downloaded and tailored to OpenHive format; below 60%, skills are created from scratch. For the full adoption flow, match decision tree, trust signals, tool generation, and the `search_skill_repository` tool, see Skill-Repository#Skill Adoption Flow.
## Performance Guidance

### Early-Exit Monitoring
Monitoring skills that run on schedule triggers should implement early-exit when no actionable data is found. When the external source returns zero warnings or errors, the skill should report a clean status without performing full LLM analysis — reducing execution time from ~72s to <5s for clean checks. Self-reflection cycles (see Self-Evolution#Self-Reflection) can detect consecutive clean checks and propose this optimization.
### Parallel Query Patterns

When a skill needs multiple independent data sources, prefer parallel queries over sequential chains:

- `memory_search` + `vault_get` can run concurrently (different stores)
- Multiple `query_team` calls for independent data should be issued simultaneously (e.g., "Query financial-news and stock-data simultaneously for independent data")
- `list_completed_tasks` + `memory_list` are independent reads
Include explicit parallel instructions in skill definitions when the data sources have no dependency between them.
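If the tools above were exposed as async functions, the parallel pattern reduces to a `Promise.all`. The wrapper names here are hypothetical stand-ins for `memory_search` and `vault_get`:

```typescript
// Sketch: two independent reads started together rather than sequentially.
async function gatherContext(
  memorySearch: (q: string) => Promise<string[]>,
  vaultGet: (key: string) => Promise<string>
): Promise<{ memories: string[]; secret: string }> {
  // Promise.all starts both calls before awaiting either one.
  const [memories, secret] = await Promise.all([
    memorySearch("deploy history"),
    vaultGet("loggly_api_key"),
  ]);
  return { memories, secret };
}
```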