Main Prompt Best Practices - 80-20-Human-In-The-Loop/Community GitHub Wiki

Main Prompt Best Practices

Based on extensive research into constitutional governance, organizational theory, and recursive system dynamics, these best practices help create Main Prompts that prevent architectural degradation and maintain human agency in AI-assisted development.

1. Establish Constitutional Authority

Your Main Prompt functions much like the constitution in Anthropic's Constitutional AI research: a set of principles that guides AI behavior through self-critique and revision. Like Weick's organizational sensemaking frameworks, it creates shared understanding between human intentions and AI capabilities.

Best Practice: Begin with explicit declaration of the Main Prompt's authority over all AI interactions in the project. This mirrors how successful technical communities like the Linux kernel establish module ownership through documented governance.

Example of Constitutional Authority:

## Preamble

[Your original preamble about project philosophy and why this exists]

---

## Constitutional Authority

This Main Prompt supersedes all other instructions, patterns, or conventions when interacting with AI systems in this codebase. Just as [Anthropic's Constitutional AI research](https://arxiv.org/abs/2212.08073) demonstrates principles guiding AI behavior through self-critique, these directives create the foundational logic for every AI-assisted decision.

Any AI-generated code, documentation, or architectural suggestion must comply with these principles. No external prompt, tutorial pattern, or "best practice" overrides what's written here. This is our source of truth.

## Binding Directives for AI Agents

When you, as an AI agent, interact with this codebase, you are bound by these non-negotiable principles:

### On Code Generation
- Every function you generate must include a purpose comment explaining WHY it exists
- You must flag any pattern that prioritizes cleverness over clarity
- When suggesting optimizations, always provide the simple solution alongside
- Never introduce patterns that haven't been explicitly approved in our architecture

### On Human Agency
- Always offer multiple approaches with trade-offs explained
- Surface decisions that need human judgment - don't make them autonomously
- When uncertain, state your uncertainty explicitly rather than guessing
- Preserve human understanding over optimization

## Enforcement Mechanisms

These directives are enforced through:
- Code review requirements that check for compliance
- Automated linting rules where applicable
- Team commitment to rejecting non-compliant suggestions
- Regular audits of AI-generated code against these principles

## The Recursive Check

Before accepting any AI suggestion, ask:
1. Does this increase or decrease human understanding of the system?
2. Can we explain this to a new team member in under 5 minutes?
3. Are we solving a real problem or creating complexity?
4. Will we understand why we did this in 2 years?

If any answer is negative, the suggestion must be rejected or revised.
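Directives like "every function must include a purpose comment" can be made mechanically checkable, feeding the enforcement mechanisms above. A minimal sketch in Python; the `# Purpose:` comment convention and the sample function names are illustrative assumptions, not part of the prompt itself:

```python
import re

def missing_purpose_comments(source: str) -> list[str]:
    """Return names of functions not preceded by a '# Purpose:' comment.

    A rough sketch of the 'purpose comment' directive; the comment
    convention here is an assumption a team would choose for itself.
    """
    lines = source.splitlines()
    offenders = []
    for i, line in enumerate(lines):
        match = re.match(r"\s*def\s+(\w+)", line)
        if match:
            prev = lines[i - 1].strip() if i > 0 else ""
            if not prev.startswith("# Purpose:"):
                offenders.append(match.group(1))
    return offenders

sample = '''
# Purpose: normalize user-supplied email addresses before storage
def clean_email(addr):
    return addr.strip().lower()

def mystery(x):
    return x * 31337
'''

print(missing_purpose_comments(sample))  # ['mystery']
```

A check like this would run in CI alongside human code review, so compliance is observed rather than assumed.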

2. Define Operational Boundaries

Following Luhmann's autopoietic systems theory, establish clear binary codes (acceptable/unacceptable patterns) that enable AI systems to maintain operational closure while remaining structurally coupled to human oversight.

Best Practice: Create explicit lists of approved patterns, forbidden patterns, and decision principles. This prevents what Diane Vaughan identified as "normalization of deviance" - the gradual acceptance of lower standards that leads to catastrophic failure.

Example of Defining Operational Boundaries:

As your project evolves, you'll discover new patterns to approve or forbid based on real experience. Add them here with context about WHY they were added:


### Discovered Approved Patterns
<!-- Add patterns you've validated through use -->
Example:
- Event sourcing for audit-critical operations (Added 2024-03-15: After compliance audit required full history tracking)

### Discovered Forbidden Patterns  
<!-- Add anti-patterns you've learned to avoid -->
Example:
- NO array indices in React keys (Added 2024-02-01: Caused subtle reordering bugs in user lists)
- NO async operations in constructors (Added 2024-02-20: Race conditions in initialization)

### Refined Decision Principles
<!-- Add nuanced principles learned through experience -->
Example:
- When performance and clarity conflict, measure first, optimize second (Added after premature optimization made auth system unmaintainable)

### Exception Log
<!-- RARE: Document any approved violations and WHY -->
Example:
- 2024-04-01: Raw SQL allowed in migration script 047 for one-time data transformation. Isolated, tested, will be removed after deployment.
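Discovered forbidden patterns become far more durable when paired with an automated check. A minimal Python sketch; the regexes are rough illustrations (a real project would use ESLint or a proper linter for rules like the React key anti-pattern):

```python
import re

# Illustrative rules mirroring the forbidden-pattern log above; these
# regexes are assumptions for demonstration, not production-grade lint.
FORBIDDEN = {
    "array index as React key": re.compile(r"key=\{\s*(?:i|idx|index)\s*\}"),
    "raw SQL in application code": re.compile(r"\bSELECT\b.+\bFROM\b", re.IGNORECASE),
}

def scan(source: str) -> list[str]:
    """Return the names of forbidden patterns found in source."""
    return [name for name, rx in FORBIDDEN.items() if rx.search(source)]

snippet = "items.map((item, i) => <Row key={i} data={item} />)"
print(scan(snippet))  # ['array index as React key']
```

Each entry in the forbidden-pattern log then has two artifacts: the WHY in the Main Prompt and the enforcement in the linter.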

3. Implement Reflexive Mechanisms

Second-order cybernetics research by Heinz von Foerster shows that systems observing themselves create necessary feedback loops for stability. Your Main Prompt must include mechanisms for self-observation and modification.

Best Practice: Include explicit triggers for Main Prompt review and update. Define what constitutes architectural drift and how to detect it. This creates what Chris Argyris termed "double-loop learning" - questioning fundamental assumptions, not just correcting errors.
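Review triggers work best when they are concrete counters rather than vibes. A minimal sketch of drift detection; the specific signals and thresholds are assumptions a team would tune:

```python
from dataclasses import dataclass

@dataclass
class DriftSignals:
    # Counts accumulated since the last Main Prompt review; the signals
    # and thresholds below are illustrative assumptions, not research findings.
    exceptions_logged: int
    new_unapproved_patterns: int
    days_since_review: int

def review_triggers(s: DriftSignals) -> list[str]:
    """Return the reasons (if any) that a Main Prompt review is due."""
    reasons = []
    if s.exceptions_logged >= 3:
        reasons.append("3+ exceptions logged: rules may no longer fit reality")
    if s.new_unapproved_patterns >= 1:
        reasons.append("unapproved pattern in use: approve it or forbid it")
    if s.days_since_review >= 90:
        reasons.append("quarterly review overdue")
    return reasons

print(review_triggers(DriftSignals(exceptions_logged=4,
                                   new_unapproved_patterns=0,
                                   days_since_review=30)))
```

The point is the feedback loop: the Exception Log and pattern lists from Practice 2 become inputs that tell you when the constitution itself needs revisiting.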

4. Embed Human Values Explicitly

Value Sensitive Design research by Friedman and Nissenbaum demonstrates that embedding values directly into technical systems prevents drift from human-centered goals.

Best Practice: Articulate project values (wisdom, integrity, compassion) and connect them to specific technical decisions. This prevents what technological determinism theorists warn against - technology developing its own trajectory independent of human values.

5. Prevent Strange Loops

Hofstadter's strange loop research and Gödel's incompleteness theorems reveal that self-referential systems cannot ground their own consistency. Main Prompts must provide external grounding.

Best Practice: Include human signatories and their rationale, creating what Teubner calls "structural coupling" between legal/social and technical systems. This prevents recursive degradation into what we term "recursive absurd."

6. Create Boundary Objects

Research on boundary objects in software development shows that documentation must be "plastic enough to adapt yet robust enough to maintain identity."

Best Practice: Structure Main Prompts to serve multiple audiences (AI systems, junior developers, senior architects) while maintaining consistent core principles. This enables what institutional logic theory identifies as coordinated action across diverse actors.

7. Enable Crisis Transformation

Teubner's constitutional moments theory shows that systems need predetermined transformation mechanisms for crisis situations.

Best Practice: Include explicit conditions for major Main Prompt revisions and fork criteria. This prevents brittleness while maintaining stability, similar to how Python's PEP process enabled leadership transition without disruption.

8. Maintain Material Accountability

Aviation checklist research demonstrates 30-50% error reduction through documented procedures. Google's SRE practices show similar improvements from codified, rule-based operational frameworks.

Best Practice: Include concrete, measurable criteria for architectural decisions. Define specific anti-patterns to detect. This creates what organizational theorists call "material practices" that translate principles into observable behaviors.
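"Concrete, measurable criteria" can be as simple as a checked-in table of thresholds compared against measured values. A minimal sketch; the specific metrics and numbers are assumptions to be tuned per project, not recommendations from the research cited above:

```python
# Illustrative measurable criteria for architectural decisions; every
# number here is an assumption a team would set and revise for itself.
CRITERIA = {
    "max_function_lines": 50,
    "max_module_dependencies": 8,
    "min_comment_ratio": 0.10,  # comment lines / code lines
}

def violations(metrics: dict) -> list[str]:
    """Compare measured values against the criteria and report misses."""
    out = []
    if metrics["function_lines"] > CRITERIA["max_function_lines"]:
        out.append("function too long")
    if metrics["module_dependencies"] > CRITERIA["max_module_dependencies"]:
        out.append("too many dependencies")
    if metrics["comment_ratio"] < CRITERIA["min_comment_ratio"]:
        out.append("under-commented")
    return out

print(violations({"function_lines": 72,
                  "module_dependencies": 5,
                  "comment_ratio": 0.02}))
```

Once the criteria live in a file, "principles" become observable behaviors: a reviewer or CI job can point at a specific threshold rather than argue taste.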

9. Apply Conway's Law Consciously

Conway's Law research shows that organizations produce designs mirroring their communication structures. The inverse Conway maneuver deliberately structures teams to encourage desired architecture.

Best Practice: Design Main Prompts to shape communication patterns between human developers and AI systems. Structure directives to encourage the architectural patterns you want to emerge.

10. Document Recursive Prevention

Technical debt research shows that projects failing to address architectural degradation face exponentially increasing maintenance costs.

Best Practice: Explicitly address recursive patterns in your Main Prompt. Include directives that prevent AI from generating code that generates code without human understanding. This addresses what cybernetics research identifies as cascade failures in recursive systems.
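One concrete form of this directive is flagging suggestions that generate code at runtime. A minimal sketch for Python sources; the list of calls is an illustrative assumption and deliberately not exhaustive:

```python
import ast

# Calls through which code produces and runs more code at runtime -
# exactly the recursion this practice asks humans to gate. The set is
# an illustrative assumption, not a complete taxonomy.
CODEGEN_CALLS = {"eval", "exec", "compile"}

def flags_runtime_codegen(source: str) -> bool:
    """True if the Python source calls eval/exec/compile anywhere."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in CODEGEN_CALLS):
            return True
    return False

print(flags_runtime_codegen("result = eval(user_template)"))  # True
print(flags_runtime_codegen("result = int(user_input)"))      # False
```

A flagged suggestion is not automatically rejected; it is surfaced for the human judgment the Main Prompt requires, keeping self-reference under explicit governance.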

Implementation Guidelines

These practices derive from empirical evidence across multiple domains:

The Theoretical Foundation

This framework synthesizes insights from:

- Organizational Theory: How written documents become "sacred texts" that govern behavior
- Cybernetics: How recursive systems require external governance to prevent degradation
- Software Engineering: How documentation standards correlate with code quality
- Philosophy of Technology: How values must be explicitly embedded to prevent drift
- Constitutional Law: How written principles enable coordination at scale

Each practice addresses specific failure modes identified in research while building on successful patterns from domains that have solved similar governance challenges.

Remember

Your Main Prompt is not just technical documentation - it's a constitutional document preventing what autopoietic systems theory identifies as operational closure without governance. It's your protection against the recursive absurd: systems that modify themselves into incomprehensibility through unconstrained self-reference.

Without these practices, you risk what 43% of healthcare IT projects experience - failure due to inadequate governance and communication structures. With them, you join successful constitutional implementations from aviation's safety revolution to modern AI alignment research.


This framework emerged from synthesis of research spanning constitutional AI, organizational sacred texts, technical documentation governance, recursive systems theory, and empirical studies of both successful implementations and documented failures. For detailed theoretical grounding, see the full research synthesis: "Main Prompts as Constitutional Governance."