Architecture - diShine-digital-agency/ai-prompt-library GitHub Wiki
Technical architecture, module reference, data formats, and project structure of the AI Prompt Library.
- Architecture Overview
- Module Reference
- Data Formats
- localStorage Keys
- Project File Structure
- Node.js Built-in Modules Used
- How viewer.html Works
## Architecture Overview

```
┌───────────────────────────────────────────────────────────┐
│                      User Interface                       │
│                                                           │
│  CLI (bin/prompt-lib.js)        HTML (viewer.html)        │
│  ├── list, search, show         ├── Browse tab            │
│  ├── use, copy                  ├── Compose tab           │
│  ├── compose                    ├── Create tab            │
│  ├── create                     ├── Generate tab          │
│  ├── generate                   ├── Tools tab             │
│  ├── lint                       │   ├── Linter            │
│  ├── optimize                   │   ├── Optimizer         │
│  ├── recommend                  │   └── Recommender       │
│  ├── saved                      ├── Playground tab        │
│  └── viewer                     └── My Library tab        │
│                                                           │
│  Desktop apps (desktop/)                                  │
│  ├── macOS native (Swift + WebKit)                        │
│  ├── Linux native (Python + GTK + WebKitGTK)              │
│  └── Windows (Edge/Chrome app mode)                       │
├───────────────────────────────────────────────────────────┤
│                       Core Modules                        │
│                                                           │
│  src/index.js       → Prompt loader, persistence          │
│  src/search.js      → Scored search algorithm             │
│  src/formatter.js   → ANSI terminal formatting            │
│  src/generator.js   → Dynamic prompt generation           │
│  src/linter.js      → 14-rule prompt quality scorer       │
│  src/optimizer.js   → Content-aware prompt optimizer      │
│  src/recommender.js → Intent-based prompt matcher         │
├───────────────────────────────────────────────────────────┤
│                        Data Layer                         │
│                                                           │
│  prompts/**/*.md         → Built-in prompt files          │
│  ~/.prompt-library/      → User data directory            │
│    custom-prompts.json   → User-created prompts           │
│    saved-prompts.json    → Saved compositions             │
│  localStorage (browser)  → HTML app persistence           │
└───────────────────────────────────────────────────────────┘
```
## Module Reference

### src/index.js

The core module. It loads prompts from the filesystem and manages user data.

Exports:
| Function | Signature | Description |
|---|---|---|
| `loadPrompts()` | `() → Array<Prompt>` | Loads all prompts from the `prompts/` directory plus custom prompts from `~/.prompt-library/custom-prompts.json` |
| `loadCustomPrompts()` | `() → Array<Prompt>` | Loads only user-created custom prompts |
| `saveCustomPrompt(prompt)` | `(Prompt) → Prompt` | Saves a custom prompt; overwrites if the slug exists, otherwise appends |
| `loadSavedCompositions()` | `() → Array<Composition>` | Loads saved compositions from `~/.prompt-library/saved-prompts.json` |
| `saveComposition(composition)` | `(Object) → Composition` | Appends a composition with an auto-generated `id` (timestamp) and `date` (ISO string) |
| `findPlaceholders(text)` | `(string) → Array<string>` | Extracts unique `{{placeholder}}` tokens from text using the regex `/\{\{[\w_\-\s/]+\}\}/g` |
| `extractTemplate(content)` | `(string) → string\|null` | Extracts content between code fences in the `## Template` section |
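The documented placeholder regex can be exercised directly. A minimal sketch of the described `findPlaceholders()` behavior (unique `{{...}}` tokens); this is not the module's actual source:

```javascript
// The placeholder pattern documented above, applied with de-duplication.
const PLACEHOLDER_RE = /\{\{[\w_\-\s/]+\}\}/g;

function findPlaceholders(text) {
  // match() returns null when nothing matches; Set removes duplicate tokens
  return [...new Set(text.match(PLACEHOLDER_RE) || [])];
}
```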
Exported Constants:

| Name | Value | Description |
|---|---|---|
| `PROMPTS_DIR` | `<project>/prompts/` | Absolute path to built-in prompts |
| `USER_DATA_DIR` | `~/.prompt-library/` | User data directory |
| `USER_PROMPTS_FILE` | `~/.prompt-library/custom-prompts.json` | Custom prompts file |
| `USER_SAVED_FILE` | `~/.prompt-library/saved-prompts.json` | Saved compositions file |
Internal Functions:

- `parseFrontmatter(content)`: parses YAML frontmatter (between `---` markers) into a `{meta, body}` object. Handles bracket arrays (`[tag1, tag2]`).
- `walkDir(dir)`: recursively finds all `.md` files in a directory tree.
- `ensureUserDir()`: creates `~/.prompt-library/` if it doesn't exist.
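A frontmatter parser of this shape can be sketched in a few lines. The sketch below follows the description (split on `---` markers, expand bracket arrays); it is illustrative, not the project's implementation:

```javascript
// Minimal frontmatter parser: returns { meta, body }, expanding
// bracket arrays such as [tag1, tag2] into real arrays.
function parseFrontmatter(content) {
  const match = content.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
  if (!match) return { meta: {}, body: content };
  const meta = {};
  for (const line of match[1].split('\n')) {
    const idx = line.indexOf(':');
    if (idx === -1) continue;
    const key = line.slice(0, idx).trim();
    let value = line.slice(idx + 1).trim();
    if (value.startsWith('[') && value.endsWith(']')) {
      // bracket array: "[a, b]" -> ["a", "b"]
      value = value.slice(1, -1).split(',').map((s) => s.trim()).filter(Boolean);
    }
    meta[key] = value;
  }
  return { meta, body: match[2] };
}
```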
### src/search.js

Exports:
| Function | Signature | Description |
|---|---|---|
| `searchPrompts(prompts, query)` | `(Array, string) → Array` | Scores and ranks prompts by relevance to the query |
Scoring:
| Match Location | Points per Term |
|---|---|
| Title | 100 |
| Tag | 50 |
| Category | 30 |
| Content | 10 |
Algorithm:
- Split query into lowercase terms
- For each prompt, score each term against title, tags, category, and content
- Sum all points
- Filter out zero-score prompts
- Sort by score descending
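The steps above can be sketched as follows; this follows the documented point values (title 100, tag 50, category 30, content 10) but is not the library's exact code:

```javascript
// Scored search sketch: per lowercase term, award points by match location,
// drop zero-score prompts, sort descending by score.
function searchPrompts(prompts, query) {
  const terms = query.toLowerCase().split(/\s+/).filter(Boolean);
  return prompts
    .map((p) => {
      let score = 0;
      for (const t of terms) {
        if (p.title.toLowerCase().includes(t)) score += 100;
        if ((p.tags || []).some((tag) => tag.toLowerCase().includes(t))) score += 50;
        if ((p.category || '').toLowerCase().includes(t)) score += 30;
        if ((p.content || '').toLowerCase().includes(t)) score += 10;
      }
      return { ...p, score };
    })
    .filter((p) => p.score > 0)
    .sort((a, b) => b.score - a.score);
}
```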
### src/formatter.js

Provides ANSI color formatting for CLI output. Respects the `NO_COLOR` environment variable (when set, all color codes are empty strings).
Exports:
| Function | Description |
|---|---|
| `formatPromptList(prompts)` | Formats all prompts grouped by category |
| `formatPromptDetail(prompt)` | Formats a single prompt with full metadata |
| `formatCategories(prompts)` | Formats the category list with counts |
| `formatStats(prompts)` | Formats library statistics (totals, difficulty breakdown, top tags) |
| `formatSearchResults(results, query)` | Formats search results with scores |
Color scheme:
- Cyan: titles, slugs, section headers
- Magenta: category names
- Yellow: metadata labels, intermediate difficulty
- Green: beginner difficulty, tags
- Red: advanced difficulty
- Dim/Gray: separators, secondary info
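The `NO_COLOR` behavior described above can be sketched like this; the escape values are standard ANSI codes, and the helper name is illustrative:

```javascript
// When NO_COLOR is present in the environment, every color code
// collapses to an empty string, so output is plain text.
function makeColors(env = process.env) {
  const noColor = 'NO_COLOR' in env;
  const code = (n) => (noColor ? '' : `\x1b[${n}m`);
  return {
    cyan: code(36),    // titles, slugs, section headers
    magenta: code(35), // category names
    yellow: code(33),  // metadata labels, intermediate difficulty
    green: code(32),   // beginner difficulty, tags
    red: code(31),     // advanced difficulty
    dim: code(2),      // separators, secondary info
    reset: code(0),
  };
}
```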
### src/generator.js

Exports:
| Function | Signature | Description |
|---|---|---|
| `getFrameworks()` | `() → Array<FrameworkInfo>` | Returns all frameworks with metadata and questions |
| `getFramework(key)` | `(string) → Framework\|null` | Returns a single framework by key |
| `generatePrompt(key, answers)` | `(string, Object) → string` | Generates a complete prompt; validates required fields and applies defaults |
Available Frameworks:

| Key | Name | Questions |
|---|---|---|
| `expert-role` | Expert Role-Based | 8 |
| `chain-of-thought` | Chain-of-Thought | 5 |
| `structured-output` | Structured Output | 5 |
| `task-decomposition` | Task Decomposition | 4 |
| `guardrails` | Guardrails & Safety | 6 |
Each framework defines a `generate(answers)` function that produces the final prompt string from user answers.
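The contract implied above (validate required fields, apply defaults, then call `generate(answers)`) can be sketched with a toy framework; the framework object below is invented for illustration, not one of the five shipped frameworks:

```javascript
// Toy framework illustrating the generate(answers) contract.
const demoFramework = {
  key: 'expert-role',
  questions: [
    { name: 'role', required: true },
    { name: 'tone', required: false, default: 'professional' },
  ],
  generate: (a) => `You are an expert ${a.role}. Use a ${a.tone} tone.`,
};

// Apply defaults, validate required fields, then delegate to generate().
function generatePrompt(framework, answers) {
  const merged = { ...answers };
  for (const q of framework.questions) {
    if (merged[q.name] === undefined && q.default !== undefined) merged[q.name] = q.default;
    if (q.required && !merged[q.name]) throw new Error(`Missing required field: ${q.name}`);
  }
  return framework.generate(merged);
}
```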
### src/linter.js

Exports:
| Function | Signature | Description |
|---|---|---|
| `lintPrompt(text)` | `(string) → LintResult` | Analyzes a prompt against 14 rules; returns score, grade, passed/failed rules, suggestions, and word count |
| `formatLintResult(result)` | `(LintResult) → string` | Formats a lint result as a human-readable string |
| `LINT_RULES` | `Array<Rule>` | Array of all 14 rule objects |
LintResult shape:

```js
{
  score: 72,           // 0-100
  grade: 'C',          // A, B, C, D, F
  totalRules: 14,
  passedCount: 10,
  failedCount: 4,
  passed: [...],       // Array of passed rules
  failed: [...],       // Array of failed rules
  suggestions: [...],  // Array of suggestion strings (sorted by weight)
  wordCount: 89
}
```

Scoring: `score = Math.round((earnedWeight / totalWeight) × 100)` where `totalWeight = 100`.
See Tools: Linter, Optimizer, Recommender for the full rule list.
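The scoring formula can be made concrete with a toy rule set; the two rules below are invented for illustration and are not drawn from the real 14-rule list:

```javascript
// Toy rules: each has a weight and a predicate over the prompt text.
const demoRules = [
  { id: 'has-role', weight: 60, test: (t) => /you are/i.test(t) },
  { id: 'long-enough', weight: 40, test: (t) => t.split(/\s+/).length >= 10 },
];

// score = Math.round((earnedWeight / totalWeight) * 100), totalWeight = 100 here.
function lintScore(text, rules) {
  const totalWeight = rules.reduce((s, r) => s + r.weight, 0);
  const earnedWeight = rules.filter((r) => r.test(text)).reduce((s, r) => s + r.weight, 0);
  return Math.round((earnedWeight / totalWeight) * 100);
}
```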
### src/optimizer.js

Exports:
| Function | Signature | Description |
|---|---|---|
| `optimizePrompt(text)` | `(string) → OptimizeResult` | Offline, rule-based optimization |
| `optimizeWithAI(text, provider, apiKey, model)` | `(string, string, string, string?) → Promise<string>` | AI-powered rewriting |
| `sendToAI(prompt, systemPrompt, provider, apiKey, model)` | `(string, string?, string, string, string?) → Promise<AIResponse>` | Sends a prompt to an AI model (Playground) |
OptimizeResult shape:

```js
{
  original: "...",         // Original prompt text
  optimized: "...",        // Optimized prompt text
  changes: [...],          // Array of change descriptions
  scoreBefore: 35,
  scoreAfter: 88,
  improvement: 53,
  lint: { ... },           // Full lint result of the optimized prompt
  domain: "coding",        // Detected domain
  audience: "developers"   // Detected audience (or null)
}
```

Optimization pipeline: domain detection → filler removal → politeness reduction → weak verb strengthening → vague language replacement → compound task decomposition → role injection → audience/tone detection → constraints → output format → examples → quality check.

7 detected domains: coding, writing, marketing, data, business, education, image.
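The pipeline's first stage, domain detection, could plausibly be keyword-based. A naive sketch follows; the keyword lists are invented for illustration and say nothing about the real detector's heuristics:

```javascript
// Naive keyword-vote domain detector: the domain with the most
// keyword hits wins; null when nothing matches.
const DOMAIN_KEYWORDS = {
  coding: ['code', 'function', 'bug', 'api'],
  writing: ['essay', 'article', 'story'],
  marketing: ['campaign', 'audience', 'brand'],
  data: ['dataset', 'chart', 'analysis'],
};

function detectDomain(text) {
  const lower = text.toLowerCase();
  let best = null;
  let bestHits = 0;
  for (const [domain, words] of Object.entries(DOMAIN_KEYWORDS)) {
    const hits = words.filter((w) => lower.includes(w)).length;
    if (hits > bestHits) { best = domain; bestHits = hits; }
  }
  return best;
}
```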
### src/recommender.js

Exports:
| Function | Signature | Description |
|---|---|---|
| `recommendPrompts(prompts, description)` | `(Array, string) → Array` | Scores all prompts by relevance |
| `buildRecommendation(prompts, description)` | `(Array, string) → Recommendation` | Builds a full recommendation with a suggested combo |
Recommendation shape:

```js
{
  description: "...",
  topPrompts: [...],        // Top 8 matches
  suggestedCombo: {
    systemPrompt: { ... },  // Best matching system prompt
    framework: { ... },     // Best matching framework
    template: { ... }       // Best matching domain template
  },
  systemPrompts: [...],     // Top 3 system prompts
  frameworks: [...],        // Top 3 frameworks
  templates: [...]          // Top 5 templates
}
```

8 intent categories: coding, writing, marketing, data, business, image, research, teaching.
## Data Formats

### Prompt Object (JSON)

```json
{
  "slug": "code-review",
  "title": "Code Review Checklist",
  "category": "development",
  "tags": ["code-review", "quality", "checklist"],
  "difficulty": "intermediate",
  "models": ["claude", "gpt-4", "gemini"],
  "content": "# Code Review Checklist\n\n## Template\n\n```\n...\n```",
  "path": "development/code-review.md"
}
```

Custom prompts add:

```json
{
  "fields": [
    { "name": "language", "description": "Programming language" }
  ],
  "custom": true
}
```

### Saved Composition (JSON)

```json
{
  "id": 1712430000000,
  "title": "Composed: Coding Assistant + Chain-of-Thought + Code Review",
  "result": "# SYSTEM PROMPT\n\n...\n\n# REASONING FRAMEWORK\n\n...",
  "layers": ["Coding Assistant", "Chain-of-Thought", "Code Review"],
  "type": "composed",
  "date": "2026-04-06T12:00:00.000Z"
}
```

### Prompt File Format (Markdown)

````markdown
---
title: My Prompt Title
category: frameworks
tags: [tag1, tag2, tag3]
difficulty: intermediate
models: [claude, gpt-4, gemini]
---

# My Prompt Title

## When to Use

[description]

## Template

```
Your prompt template with {{placeholders}} here
```

## Tips

[expert tips]

## Common Mistakes

[pitfalls to avoid]
````

## localStorage Keys

These keys are used by the Prompt Workshop (viewer.html):
| Key | Type | Description |
|---|---|---|
| `pl_dark` | boolean | Dark mode preference (`true`/`false`) |
| `pl_saved` | JSON array | All saved prompts, filled templates, composed prompts, and custom prompts. Items from the database are marked with `source: 'database'` |
| `pl_sidebar_width` | number | Sidebar width in pixels (260–600). Persists across sessions |
| `api_settings` | JSON object | API keys and model preferences for the Playground and AI Optimizer. Contains the provider, keys for OpenAI/Anthropic/Google, and selected models |
| `pg_prefill` | string | Temporary Playground prefill data. Cleared after use |
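Since `pl_saved` holds a JSON array, reads should tolerate missing or corrupt values. A defensive-read sketch (the `storage` parameter stands in for `window.localStorage`; this is not viewer.html's actual code):

```javascript
// Safely read the pl_saved array: missing key or corrupt JSON
// falls back to an empty library instead of throwing.
function readSaved(storage) {
  try {
    const raw = storage.getItem('pl_saved');
    const parsed = raw ? JSON.parse(raw) : [];
    return Array.isArray(parsed) ? parsed : [];
  } catch {
    return [];
  }
}
```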
- The linter includes prompt-type detection (image/code/system/general) with adjusted rule weights; see INFRASTRUCTURE.md
- The optimizer adds a diff-view engine for color-coded before/after comparison
- The Playground includes multi-model compare via `Promise.allSettled()` with 30 s `AbortController` timeouts
- The Create tab includes 6 starter templates pre-filling title, tags, body, and dynamic fields
- My Library has search, type filter, and sort controls
- Accessibility: ARIA landmarks, `role="tablist"`, a skip-to-content link, `focus-visible` styling, and arrow-key tab navigation
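The multi-model compare pattern (`Promise.allSettled()` plus per-request 30 s `AbortController` timeouts) can be sketched as follows; `sendToAI` here is a stand-in parameter, not the real API:

```javascript
// Fan out one prompt to several providers; a failing or slow provider
// does not sink the others, and each request is aborted after 30 s.
async function compareModels(prompt, providers, sendToAI) {
  const results = await Promise.allSettled(
    providers.map(async (p) => {
      const controller = new AbortController();
      const timer = setTimeout(() => controller.abort(), 30_000);
      try {
        return await sendToAI(prompt, p, controller.signal);
      } finally {
        clearTimeout(timer);
      }
    })
  );
  return results.map((r, i) => ({
    provider: providers[i],
    ok: r.status === 'fulfilled',
    value: r.status === 'fulfilled' ? r.value : String(r.reason),
  }));
}
```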
## Project File Structure

```
ai-prompt-library/
├── bin/
│   └── prompt-lib.js        # CLI entry point (ESM, #!/usr/bin/env node)
├── src/
│   ├── index.js             # Prompt loader, persistence, placeholders
│   ├── search.js            # Scored search algorithm
│   ├── formatter.js         # ANSI terminal formatting
│   ├── generator.js         # 5 frameworks, dynamic prompt generation
│   ├── linter.js            # 14-rule quality scorer
│   ├── optimizer.js         # Content-aware optimizer + AI APIs
│   └── recommender.js       # Intent-based prompt matcher
├── prompts/
│   ├── business/            # 12 business templates
│   ├── data/                # 10 data analysis templates
│   ├── development/         # 13 development templates
│   ├── frameworks/          # 12 prompting framework guides
│   ├── image-generation/    # 8 image generation templates
│   ├── marketing/           # 11 marketing templates
│   ├── model-specific/      # 6 model-specific guides
│   └── system-prompts/      # 10 system prompts
├── desktop/
│   ├── build-all.sh         # Build all platforms
│   ├── build-macos.sh       # macOS build script
│   ├── build-linux.sh       # Linux build script
│   ├── build-windows.bat    # Windows build script
│   ├── macos-native/        # Swift source for macOS native app
│   ├── linux-native/        # Python + GTK source for Linux native app
│   ├── icons/               # App icons for all platforms
│   └── images/              # Screenshots and documentation images
├── test/
│   └── run.js               # Test suite (46 tests, zero dependencies)
├── wiki/
│   ├── en/                  # English wiki pages
│   ├── fr/                  # French wiki pages
│   └── it/                  # Italian wiki pages
├── viewer.html              # Prompt Workshop (self-contained SPA)
├── package.json             # Package config (zero dependencies)
├── README.md                # Project overview
├── GUIDE.md                 # User guide
├── TECHNICAL.md             # Technical documentation
├── FUNCTIONS.md             # Functions reference
├── CONTRIBUTING.md          # Contribution guidelines
├── CHANGELOG.md             # Version history
├── CODE_OF_CONDUCT.md       # Code of conduct
├── SECURITY.md              # Security policy
└── LICENSE                  # MIT License
```
## Node.js Built-in Modules Used

The project uses zero npm dependencies; only Node.js built-in modules:
| Module | Usage |
|---|---|
| `fs` | File reading (`readFileSync`), writing (`writeFileSync`), directory traversal (`readdirSync`, `statSync`), existence checks (`existsSync`), directory creation (`mkdirSync`) |
| `path` | Path manipulation (`join`, `dirname`, `relative`, `basename`) |
| `url` | `fileURLToPath` for ESM-compatible `__dirname` |
| `readline` | Interactive CLI input (questions and multi-line prompt entry) |
| `child_process` | Clipboard copy via `execSync` (`pbcopy`, `clip`, `xclip`, `xsel`); opening the browser (`open`, `xdg-open`, `start`) |
| `os` | Home directory (`homedir`), temp directory (`tmpdir`) |
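The clipboard-copy pattern named above (pick a platform command, pipe text to it via `execSync`) can be sketched like this; the `exec` function is injected so the selection logic stays testable, and the fallback order is an assumption:

```javascript
// Choose the clipboard command for the current platform.
function clipboardCommand(platform) {
  if (platform === 'darwin') return 'pbcopy';
  if (platform === 'win32') return 'clip';
  return 'xclip -selection clipboard'; // xsel is the documented alternative
}

// Pipe text to the command; in practice exec would be
// child_process.execSync, which accepts an { input } option.
function copyToClipboard(text, exec, platform = process.platform) {
  exec(clipboardCommand(platform), { input: text });
}
```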
## How viewer.html Works

The viewer.html file is a self-contained single-page application:

- No external dependencies: pure HTML, CSS, and vanilla JavaScript in a single file
- Prompt data embedded: all 82+ prompts are serialized as JSON in a `<script>` tag
- localStorage for persistence: custom prompts, saved compositions, favorites, API keys, and UI preferences
- Responsive design: works on desktop, tablet, and mobile
- Dark/light mode: togglable, with the preference saved to localStorage
When launched via `prompt-lib viewer`, the CLI:

1. Reads `viewer.html`
2. Loads all prompts (including custom prompts)
3. Injects the prompt data as JSON into the HTML
4. Writes the modified HTML to a temp file
5. Opens the temp file in the default browser

When opened directly, the embedded prompt data (built at release time) is used.
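The injection step can be sketched as a string replacement over the embedded `<script>` payload; the `id="prompt-data"` marker used here is an assumption for illustration, not the real attribute:

```javascript
// Replace the JSON payload inside the embedded script tag with
// freshly loaded prompt data (built-in plus custom prompts).
function injectPrompts(html, prompts) {
  const payload = JSON.stringify(prompts);
  return html.replace(
    /(<script id="prompt-data"[^>]*>)[\s\S]*?(<\/script>)/,
    (_, open, close) => open + payload + close
  );
}
```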
Navigation: ← API & Playground | Desktop Apps →