
AI Compass

Overview

The AI Compass is a field tool for leaders, educators, and system stewards navigating the uncertain terrain of artificial intelligence. It supports ethically grounded, politically aware, operationally sound, and epistemologically responsible engagements with AI systems.

Rather than presenting a single ideology, it invites multi-perspectival reflection through four guiding lenses:


1. Moral Lens

Core Question: What is the right thing to do — and for whom?

This lens invites clarity about values, obligations, and unintended consequences. It focuses on preserving human dignity, protecting the vulnerable, and interrogating power.

Diagnostic Questions:

  • Who benefits from this AI system, and who bears the risk?
  • Are any essential human responsibilities being offloaded to machines?
  • Does this choice reduce or increase our collective capacity for care and justice?

Tension Point: The temptation to optimize for efficiency at the cost of empathy or relational depth.

Use Case (Ardens Style): A community group resists adoption of a predictive policing AI, citing moral hazards and historical trauma, despite pressure from funders to “modernize.”


2. Operational Lens

Core Question: Does it work? Can it fail safely?

This lens evaluates functionality, robustness, reliability, and alignment with real-world conditions.

Diagnostic Questions:

  • What assumptions does this model make about the world, users, or context?
  • Is the training data representative, and how is drift monitored? (One minimal monitoring sketch follows this list.)
  • Can the system fail gracefully — and who gets notified?
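
There is no single way to answer the drift question above. As one illustration only, the sketch below compares the distribution of a single input feature between a training-time sample and recent production inputs; the feature name, data, and alert threshold are all hypothetical.

  # Minimal drift-check sketch (illustrative only): compare the distribution of
  # one input feature at training time against recent production inputs.
  # The feature, the data, and the alert threshold are all hypothetical.
  import numpy as np
  from scipy.stats import ks_2samp

  rng = np.random.default_rng(0)
  training_sample = rng.normal(45, 12, size=5000)    # stand-in for training-time values
  production_sample = rng.normal(58, 14, size=1000)  # stand-in for recent production values

  statistic, p_value = ks_2samp(training_sample, production_sample)
  if p_value < 0.01:  # illustrative alert threshold
      print(f"Possible drift detected (KS={statistic:.2f}, p={p_value:.4f}): "
            "review inputs and the retraining plan before trusting outputs.")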

Tension Point: Treating AI outputs as infallible, especially when black-box systems obscure operational gaps.

Use Case: A rural health network audits its diagnostic AI system and discovers it consistently misidentifies symptoms in older Indigenous patients due to training set bias. Mitigation is prioritized over expansion.
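
The kind of audit described in this use case can begin with very simple checks. The sketch below is a hypothetical illustration, not the health network's actual method: it groups evaluation results by a demographic attribute and compares error rates, the sort of comparison that can surface the disparity described above. Column names and data are invented.

  # Minimal subgroup-audit sketch (illustrative only): compare error rates
  # across patient groups in a hypothetical evaluation log.
  import pandas as pd

  evaluation = pd.DataFrame({
      "patient_group": ["A", "A", "A", "A", "B", "B", "B", "B"],
      "prediction_correct": [1, 1, 1, 0, 0, 1, 0, 0],
  })

  error_rate_by_group = 1 - evaluation.groupby("patient_group")["prediction_correct"].mean()
  print(error_rate_by_group)  # per-group error rates
  print("Largest disparity between groups:",
        round(float(error_rate_by_group.max() - error_rate_by_group.min()), 2))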


3. Political Lens

Core Question: Who controls it, and to what end?

This lens reveals the structures of ownership, access, governance, and coercion that surround AI systems.

Diagnostic Questions:

  • Who holds decision-making power over model design, deployment, and updates?
  • Are the terms of use transparent and contestable?
  • Does the system reinforce existing inequalities or disrupt them?

Tension Point: Framing surveillance and behavior modification as "service enhancements."

Use Case: Educators reject a contract for a behavior-monitoring AI after discovering it shares metadata with law enforcement and insurers.


4. Epistemic Lens

Core Question: How do we know what we think we know?

This lens addresses the knowledge claims, interpretive frameworks, and cognitive biases embedded in AI systems.

Diagnostic Questions:

  • What does the model assume is true about the world?
  • Are alternate ways of knowing (e.g. oral tradition, local experience) acknowledged or erased?
  • How is uncertainty communicated — or concealed?

Tension Point: Mistaking fluency or coherence for truth.

Use Case: A civil society group evaluates multiple AI-generated policy summaries and discovers subtle distortions tied to ideological priors. A hybrid human–AI review process is introduced.


Prototype Case Study (All Lenses in Action)

Scenario: A regional climate alliance considers deploying a language model to draft rapid-response communications during climate emergencies.

  • Moral Lens: Aims to prevent panic and protect vulnerable communities. But who defines “panic”? Messaging must balance urgency with respect.
  • Operational Lens: Tests reveal the model struggles with dialectal variation and lacks accurate local data. Reliability in edge cases is low.
  • Political Lens: The system is licensed through a global tech firm with rights to retain and fine-tune on the region’s prompts.
  • Epistemic Lens: The LLM often reframes floods and fires in property-centric terms, underplaying Indigenous ecological perspectives.

Outcome: The alliance slows deployment, convenes multi-stakeholder reviews, and prototypes a local fine-tune with community oversight.


Integration with the Ardens Project Framework

AI Compass & Project Framework — A Dual Architecture for Intelligence-in-Action

The AI Compass provides orientation, discernment, and strategic coherence. The Ardens Project Management Framework ensures coordination, acceleration, and real-world traction.

Together, they enable:

  • Ethically grounded action
  • Coherent tactical planning
  • Role-aware team dynamics
  • Reflective iteration and learning

  Compass                 Framework                 Synergy
  Ethical clarity         Mission execution         Avoids hollow optimization
  System lensing          Tactical planning         Delivers without blindness
  Power awareness         Role structure            Maintains internal sovereignty
  Knowledge discipline    Iterative checkpoints     Filters drift & hallucination

The Compass asks better questions. The Framework drives better action.


A Closing Note

The AI Compass does not dictate answers. It provokes discernment, deliberation, and design choices rooted in awareness. In times of speed and scale, these are among our last remaining acts of sovereignty.


Category: Processes & Methods