API and Playground - diShine-digital-agency/ai-prompt-library GitHub Wiki
Using the AI Playground and the Prompt Library programmatically.
- AI Playground (Browser)
- Setting Up API Keys
- Supported Providers
- Token Usage Tracking
- Programmatic Usage (JavaScript)
- API Functions Reference
The AI Playground is available in the Prompt Workshop (browser/desktop only; not in the CLI). It lets you send prompts directly to AI models and get responses, all within the browser.
- Open the Prompt Workshop (`viewer.html` or `prompt-lib viewer`)
- Click tab 6 (Playground) or press the `6` key
- Provider selector – choose between OpenAI, Anthropic, or Google
- Model override – use a specific model instead of the default
- System prompt – optional system prompt for context setting
- Prompt input – write, paste, or load a prompt from the library
- Send – sends the prompt to the selected provider's API
- Response display – formatted AI response with markdown rendering
- Token tracking – shows input/output token usage per request
- One-click copy – copy the AI response to clipboard
- Configure your API key in Settings (⚙ button) – one-time setup
- Select a provider (OpenAI, Anthropic, or Google)
- Optionally enter a system prompt
- Write or paste your prompt
- Click "Send"
- View the AI response, token usage, and model info
- Copy the response or iterate
Click the ⚙ (gear) button in the Prompt Workshop to open API Settings.
| Field | Description |
|---|---|
| Provider | Active provider: OpenAI, Anthropic, or Google |
| OpenAI API Key | Your OpenAI API key (starts with `sk-`) |
| Anthropic API Key | Your Anthropic API key (starts with `sk-ant-`) |
| Google API Key | Your Google AI Studio API key |
| Model | Optional model override for each provider |
- Keys are stored in `localStorage` – they never leave your browser
- Keys are sent directly from your browser to the provider's API (no intermediate server)
- Keys are stored under the `api_settings` localStorage key
- To clear your keys: Settings → delete the key fields, or clear browser localStorage
| Provider | Where to Get a Key |
|---|---|
| OpenAI | platform.openai.com/api-keys |
| Anthropic | console.anthropic.com/settings/keys |
| Google | aistudio.google.com/apikey |
OpenAI:

| Setting | Default |
|---|---|
| Default model | gpt-4o-mini |
| API endpoint | https://api.openai.com/v1/chat/completions |
| Auth header | Authorization: Bearer <key> |
| Temperature | 0.7 (playground), 0.3 (optimizer) |
| Max tokens | 4000 (playground), 2000 (optimizer) |
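Putting the settings above together, a raw request to the OpenAI endpoint can be sketched as follows. This is a minimal illustration, not the Workshop's actual implementation; the key is a placeholder:

```javascript
// Builds the request shape described in the table above.
function buildOpenAIRequest(apiKey, systemPrompt, userPrompt) {
  return {
    url: 'https://api.openai.com/v1/chat/completions',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${apiKey}`, // auth header from the table above
    },
    body: {
      model: 'gpt-4o-mini', // default model
      temperature: 0.7,     // playground default
      max_tokens: 4000,     // playground default
      messages: [
        { role: 'system', content: systemPrompt },
        { role: 'user', content: userPrompt },
      ],
    },
  };
}

// Sending it with fetch (Node 18+ or any browser):
async function sendOpenAI(apiKey, systemPrompt, userPrompt) {
  const { url, headers, body } = buildOpenAIRequest(apiKey, systemPrompt, userPrompt);
  const res = await fetch(url, { method: 'POST', headers, body: JSON.stringify(body) });
  const data = await res.json();
  return data.choices[0].message.content; // assistant reply text
}
```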
Message format:
```json
{
  "model": "gpt-4o-mini",
  "messages": [
    {"role": "system", "content": "System prompt here"},
    {"role": "user", "content": "User prompt here"}
  ]
}
```

Anthropic:

| Setting | Default |
|---|---|
| Default model | claude-sonnet-4-20250514 |
| API endpoint | https://api.anthropic.com/v1/messages |
| Auth header | x-api-key: <key> |
| API version | 2023-06-01 |
| Temperature | 0.7 (playground), 0.3 (optimizer) |
| Max tokens | 4000 (playground), 2000 (optimizer) |
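By way of illustration (a hedged sketch, not the Workshop's actual code), the same request shaped for Anthropic differs mainly in the auth and version headers:

```javascript
// Builds the Messages API request shape described above.
// Note: Anthropic authenticates with x-api-key (not a Bearer token)
// and requires an anthropic-version header.
function buildAnthropicRequest(apiKey, systemPrompt, userPrompt) {
  return {
    url: 'https://api.anthropic.com/v1/messages',
    headers: {
      'Content-Type': 'application/json',
      'x-api-key': apiKey,
      'anthropic-version': '2023-06-01',
    },
    body: {
      model: 'claude-sonnet-4-20250514', // default model
      max_tokens: 4000,                  // required by the Messages API
      system: systemPrompt,              // system prompt is a top-level field
      messages: [{ role: 'user', content: userPrompt }],
    },
  };
}
```

In Anthropic's reply, the response text lives at `content[0].text` rather than `choices[0].message.content`.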
Message format:
```json
{
  "model": "claude-sonnet-4-20250514",
  "system": "System prompt here",
  "messages": [
    {"role": "user", "content": "User prompt here"}
  ]
}
```

Google:

| Setting | Default |
|---|---|
| Default model | gemini-2.0-flash |
| API endpoint | https://generativelanguage.googleapis.com/v1beta/models/<model>:generateContent |
| Auth | API key in URL query parameter |
| Temperature | 0.7 (playground), 0.3 (optimizer) |
| Max tokens | 4000 (playground), 2000 (optimizer) |
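As a sketch (assuming the defaults above, not the Workshop's actual code), the Gemini request is distinctive in that the key travels in the URL and there is no role-separated system message:

```javascript
// Builds the generateContent request shape described above.
// The API key is passed as a URL query parameter, not a header.
function buildGeminiRequest(apiKey, systemPrompt, userPrompt, model = 'gemini-2.0-flash') {
  const base = 'https://generativelanguage.googleapis.com/v1beta/models';
  return {
    url: `${base}/${model}:generateContent?key=${apiKey}`,
    body: {
      // System and user prompts are combined into a single content block
      contents: [{ parts: [{ text: `${systemPrompt}\n\n${userPrompt}` }] }],
      generationConfig: {
        temperature: 0.7,      // playground default
        maxOutputTokens: 4000, // playground default
      },
    },
  };
}
```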
Message format:
```json
{
  "contents": [
    {"parts": [{"text": "System prompt + user prompt"}]}
  ],
  "generationConfig": {
    "temperature": 0.7,
    "maxOutputTokens": 4000
  }
}
```

Note: Google's API combines system and user prompts into a single content block.
The Playground tracks token usage for each request:
| Provider | Token Info Available |
|---|---|
| OpenAI | ✓ Input tokens, output tokens, total |
| Anthropic | ✓ Input tokens, output tokens |
| Google | ✗ Not provided by API |
Token counts are displayed after each response, helping you monitor costs and optimize prompt length.
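For example, a small display helper can normalize the per-provider differences. This sketch assumes the `usage` shape shown in the API reference at the end of this page, where Google returns `null`:

```javascript
// Formats a usage object ({ prompt_tokens, completion_tokens, total_tokens }),
// tolerating the Google case where no usage data is returned.
function formatUsage(usage) {
  if (!usage) return 'token usage not provided';
  return `in: ${usage.prompt_tokens} / out: ${usage.completion_tokens} / total: ${usage.total_tokens}`;
}

console.log(formatUsage({ prompt_tokens: 50, completion_tokens: 200, total_tokens: 250 }));
// in: 50 / out: 200 / total: 250
```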
Send the same prompt to all configured providers simultaneously and compare responses side-by-side.
How to use:
- Configure 2+ API keys via ⚙ Settings
- Write your prompt in the Playground
- Click "Compare (N models)"
- Results appear in a grid: response text, timing, token usage, copy button
Technical details:
- Uses `Promise.allSettled()` – one provider failing doesn't block others
- Each request has a 30-second timeout via AbortController
- Send button is disabled during comparison
- All API calls enforce a 30-second timeout via AbortController
- API Settings modal shows a security warning about plaintext localStorage storage
- API keys are never sent anywhere except the selected provider's endpoint
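The fan-out and timeout behavior described above can be sketched as follows (`callProvider` is a hypothetical stand-in for the real per-provider call, not part of the library's API):

```javascript
// Queries every configured provider concurrently. Promise.allSettled()
// ensures one provider's failure doesn't block the others; each request
// is aborted after 30 seconds via an AbortController.
async function compareAll(providers, prompt, callProvider) {
  const settled = await Promise.allSettled(
    providers.map(async (provider) => {
      const controller = new AbortController();
      const timer = setTimeout(() => controller.abort(), 30_000);
      try {
        return await callProvider(provider, prompt, controller.signal);
      } finally {
        clearTimeout(timer);
      }
    })
  );
  // Normalize settled results for side-by-side display
  return settled.map((r, i) => ({
    provider: providers[i],
    ok: r.status === 'fulfilled',
    result: r.status === 'fulfilled' ? r.value : String(r.reason),
  }));
}
```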
The Prompt Library can be imported and used programmatically in your own JavaScript/Node.js projects.
Installation:

```shell
npm install @dishine/prompt-library
```

Basic usage:

```javascript
import {
  loadPrompts,
  findPlaceholders,
  extractTemplate,
  saveCustomPrompt,
  loadSavedCompositions,
  saveComposition,
  loadCustomPrompts
} from '@dishine/prompt-library';

// Load all 82+ prompts
const prompts = loadPrompts();
console.log(`Loaded ${prompts.length} prompts`);

// Find placeholders in a template
const template = '{{role}} should {{task}} for {{audience}}';
const placeholders = findPlaceholders(template);
// → ['{{role}}', '{{task}}', '{{audience}}']

// Extract the template section from a prompt's content
const prompt = prompts.find(p => p.slug === 'code-review');
const tmpl = extractTemplate(prompt.content);
```

Search:

```javascript
import { searchPrompts } from '@dishine/prompt-library/src/search.js';

const prompts = loadPrompts();
const results = searchPrompts(prompts, 'chain of thought');
for (const r of results) {
  console.log(`${r.slug} (score: ${r.score})`);
}
```

Generator:

```javascript
import {
  generatePrompt,
  getFrameworks,
  getFramework
} from '@dishine/prompt-library/src/generator.js';

// List available frameworks
const frameworks = getFrameworks();
frameworks.forEach(fw => {
  console.log(`${fw.key}: ${fw.name} – ${fw.description}`);
});

// Generate a prompt from a framework
const result = generatePrompt('expert-role', {
  role: 'senior data analyst',
  experience: '10+ years',
  domain: 'financial services',
  task: 'Analyze quarterly revenue trends and identify anomalies',
  audience: 'executive team',
  tone: 'professional',
  constraints: 'Use only provided data, cite specific numbers',
  output_format: 'structured markdown with tables'
});
console.log(result);
```

Linter:

```javascript
import { lintPrompt, formatLintResult } from '@dishine/prompt-library/src/linter.js';

const result = lintPrompt('Write me something good about dogs');
console.log(`Score: ${result.score}/100 (Grade: ${result.grade})`);
console.log(`Passed: ${result.passedCount}/${result.totalRules}`);

// Human-readable output
console.log(formatLintResult(result));
```

Optimizer:

```javascript
import { optimizePrompt } from '@dishine/prompt-library/src/optimizer.js';

const result = optimizePrompt('Can you help me write a blog post about AI?');
console.log(`Score: ${result.scoreBefore} → ${result.scoreAfter}`);
console.log(`Domain: ${result.domain}`);
console.log('Changes:', result.changes);
console.log('Optimized:', result.optimized);
```

Recommender:

```javascript
import { buildRecommendation } from '@dishine/prompt-library/src/recommender.js';

const prompts = loadPrompts();
const rec = buildRecommendation(prompts, 'I need to build a REST API');
console.log('Suggested combo:');
if (rec.suggestedCombo.systemPrompt) {
  console.log(`  System: ${rec.suggestedCombo.systemPrompt.title}`);
}
if (rec.suggestedCombo.framework) {
  console.log(`  Framework: ${rec.suggestedCombo.framework.title}`);
}
if (rec.suggestedCombo.template) {
  console.log(`  Template: ${rec.suggestedCombo.template.title}`);
}
```

Sends a prompt to an AI model and returns the response. Used by the Playground.
Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
| `prompt` | string | ✓ | The user prompt to send |
| `systemPrompt` | string | ✗ | Optional system prompt |
| `provider` | string | ✓ | `'openai'`, `'anthropic'`, or `'google'` |
| `apiKey` | string | ✓ | API key for the provider |
| `model` | string | ✗ | Model override (uses provider default if omitted) |
Returns:

```javascript
{
  "text": "AI response text",
  "model": "model-name",
  "usage": { "prompt_tokens": 50, "completion_tokens": 200, "total_tokens": 250 }
  // usage is null for the Google provider
}
```

Sends a prompt to an AI model for professional rewriting. Returns only the optimized prompt text.
Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
| `text` | string | ✓ | The prompt to optimize |
| `provider` | string | ✓ | `'openai'`, `'anthropic'`, or `'google'` |
| `apiKey` | string | ✓ | API key |
| `model` | string | ✗ | Model override |
Returns: `string` – the optimized prompt text.
The AI-powered optimizer uses a carefully crafted system prompt that instructs the model to:
- Keep the original intent and meaning
- Add structure with clear sections
- Make instructions more specific
- Add constraints and quality verification
- Remove vague language
- Return only the optimized prompt (no commentary)