Community Values: The Three Pillars
Wisdom, Integrity, and Compassion form an unbreakable triangle of human will that we must preserve in our systems.
These three values represent something AI can never fully replicate. They require lived experience, moral responsibility, and genuine care for others. Together, they form the foundation of every decision we make, every tool we build, and every standard we set.
The Triangle of Irreplaceable Human Values
Wisdom: Understanding Beyond Data
What AI provides: "Based on 10 million data points, this approach has a 73% success rate."
What human wisdom provides: "I've seen this pattern before. The 27% of failures all happened when teams ignored early warning signs that aren't in the data. Here's what to watch for..."
Wisdom is not information. It's understanding consequences that ripple beyond immediate metrics.
Wisdom in Practice:
- Learning from both success and failure
- Recognizing patterns that transcend data points
- Understanding second- and third-order effects
- Knowing when rules need to be broken
- Seeing the human story behind the numbers
How We Apply Wisdom:
In Code Reviews: We don't just check for bugs. We ask: "What might go wrong that we haven't considered? What did we learn from similar decisions?"
In Tool Design: Storm Checker doesn't just fix type errors - it teaches type safety. We build tools that transfer wisdom, not just solve problems (see the sketch below).
In Community Discussions: We welcome changing our minds based on new evidence. Wisdom grows through dialogue, not dogma.
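To make the Tool Design point concrete, here is a minimal sketch of the difference between fixing and teaching. This is not Storm Checker's actual code; every name in it is hypothetical.

```python
# Hypothetical sketch only; none of these names come from Storm Checker.
# A fix-only tool would stop at the patch; a teaching tool explains
# what went wrong, why it matters, and how to reason about it next time.

def explain_type_error(variable: str, expected: str, actual: str) -> str:
    """Build a report that transfers understanding, not just a patch."""
    return (
        f"Type mismatch: '{variable}' is annotated as {expected} "
        f"but received {actual}.\n"
        f"Why it matters: callers that trust the {expected} annotation "
        f"may break at runtime in ways the annotation promised away.\n"
        f"How to think about it: decide whether the annotation or the "
        f"value misstates your intent, then fix that one."
    )

print(explain_type_error("user_id", "int", "str"))
```

The one-line patch solves today's error; the explanation is what keeps the same error from coming back tomorrow.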
Integrity: Responsibility Beyond Capability
What AI provides: "I can optimize this system for maximum engagement using these psychological triggers."
What human integrity provides: "We could exploit these vulnerabilities, but should we? What are our obligations to the people who trust us?"
Integrity means taking responsibility for what should be built, not just what can be built.
Integrity in Practice:
- Taking ownership of outcomes, not just outputs
- Saying no when something is technically possible but ethically wrong
- Being transparent about limitations and uncertainties
- Admitting mistakes and learning from them
- Doing the right thing when no one is watching
How We Apply Integrity:
In Documentation: We admit what our tools can't do, not just what they can. We're transparent about tradeoffs and limitations.
In AI Usage: We use AI to amplify human capability, not replace human responsibility. We own every outcome of AI-assisted decisions.
In Open Source: We share knowledge freely and build in public. Good ideas should benefit everyone, not just those who can pay.
Compassion: Care Beyond Optimization
What AI provides: "Users report 15% less task completion time with this interface."
What human compassion provides: "I remember feeling stupid when software made me feel incompetent. Let's design this so people feel capable and confident, not just efficient."
Compassion is caring about the human experience behind the metrics.
Compassion in Practice:
- Designing for dignity, not just usability
- Understanding that efficiency without empathy is often cruelty
- Recognizing when someone needs help vs. space
- Building for the struggling learner, not just the expert
- Making decisions based on care, not just optimization
How We Apply Compassion:
In Design: We design for the person learning to code in a refugee camp, not just Silicon Valley developers with fiber internet.
In Teaching: We explain at the learner's pace, not for the teacher's convenience. Every question is valid.
In Community: We celebrate learning, not just achievement. We support each other's growth.
The 80-20 Philosophy in Action
These values guide how we implement the 80-20 principle (a code sketch follows the list below):
80% Automation with Wisdom
- AI handles repetitive tasks
- But we understand why those tasks exist
- We know what to watch for when automation fails
20% Human Oversight with Integrity
- Humans make critical decisions
- We take responsibility for those decisions
- We don't hide behind "the algorithm decided"
100% Responsibility with Compassion
- We own the human impact of our tools
- We consider who might be harmed
- We optimize for human flourishing, not just efficiency
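One way to picture this split is a human-in-the-loop gate. The sketch below is a hypothetical illustration, not project code: the routine majority is automated, the critical minority is routed to a named person, and every decision records who owns it.

```python
# Hypothetical sketch of an 80-20 gate. The threshold, field names,
# and Decision shape are illustrative assumptions, not project code.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    decided_by: str   # a person or "automation", never "the algorithm"
    rationale: str

def handle(task: str, confidence: float, reviewer: str) -> Decision:
    if confidence >= 0.9:  # the routine ~80%: well-understood patterns
        return Decision(task, "automation", "matched a known-safe pattern")
    # the critical ~20%: a named human makes and owns the call
    return Decision(task, reviewer, "low confidence; human judgment required")

print(handle("merge dependency bump", confidence=0.97, reviewer="alice"))
print(handle("change the auth flow", confidence=0.55, reviewer="alice"))
```

The detail worth noticing is decided_by: when every record names a person or "automation" explicitly, there is no "the algorithm decided" left to hide behind.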
Living These Values Together
Different Approaches, Same Values
People can embody these values in different ways:
The Teacher Approach: Share your failures and learning process to help others avoid the same mistakes.
The Guardian Approach: Speak up for those who aren't in the room when decisions are made.
The Builder Approach: Create tools that make it easier for others to act with wisdom, integrity, and compassion.
The Questioner Approach: Ask hard questions that help reveal hidden assumptions and biases.
The Researcher Approach: Study long-term effects to inform better practices.
Why These Values Matter Now More Than Ever
In a world where AI can generate code, write documentation, and simulate empathy, these three values become more precious, not less:
Wisdom helps us navigate territory where no training data exists.
Integrity ensures we take responsibility for systems we deploy but don't fully understand.
Compassion keeps human welfare central when it's easier to optimize for metrics.
These aren't just nice-to-have principles. They're essential competencies for anyone building technology in an AI-powered world.
Practical Examples
Scenario: Implementing an AI Code Generator
Without Our Values:
- Ship it fast because it works 95% of the time
- Let users figure out when it fails
- Optimize for lines of code generated
With Our Values:
- Wisdom: Understand the 5% failure cases and their consequences
- Integrity: Clearly mark AI-generated code and its limitations (see the sketch below)
- Compassion: Ensure juniors learn principles, not just copy-paste
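As one illustration of what "clearly mark AI-generated code" can look like, here is a minimal sketch. The header fields and wording are our own assumption, not an established convention:

```python
# Hypothetical provenance header; the fields are illustrative. The point
# is that origin, limits, and the responsible human are stated right
# where the next reader will see them.

# AI-GENERATED: draft produced with an LLM assistant
# REVIEWED-BY: alice (owns the correctness of this function)
# KNOWN-LIMITS: not tested against empty input or non-ASCII usernames
def normalize_username(raw: str) -> str:
    """Lowercase and strip a username for comparison."""
    return raw.strip().lower()
```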
Scenario: Building a Performance Tool
Without Our Values:
- Show metrics and let users figure it out
- Optimize for speed of analysis
- Focus on expert users who already understand
With Our Values:
- Wisdom: Explain why performance matters, not just what's slow (sketched below)
- Integrity: Admit when our analysis might be wrong
- Compassion: Teach concepts so beginners can grow
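A minimal sketch of that difference, with hypothetical names and thresholds: the report pairs each measurement with the reason it matters and a concrete place to start looking.

```python
# Hypothetical sketch only; the 100 ms budget and the advice text are
# illustrative assumptions, not output from a real tool.

def report_slow_call(name: str, ms: float, budget_ms: float = 100.0) -> str:
    if ms <= budget_ms:
        return f"{name}: {ms:.0f} ms (within the {budget_ms:.0f} ms budget)"
    return (
        f"{name}: {ms:.0f} ms, {ms / budget_ms:.1f}x over budget.\n"
        f"Why it matters: delays past roughly 100 ms read as lag, and lag "
        f"is what makes people feel the tool (and themselves) are failing.\n"
        f"Where to start: check whether {name} repeats I/O inside a loop."
    )

print(report_slow_call("load_dashboard", 420))
```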
Your Role in These Values
For Developers
- Write code that others can understand and maintain
- Document not just what, but why
- Consider who will deal with your technical debt
For Designers
- Design for the most vulnerable users
- Make failure states dignified, not shameful
- Consider cognitive load, not just visual appeal
For Leaders
- Make decisions based on long-term human impact
- Take responsibility for what your team builds
- Create space for ethical discussions
For Everyone
- Ask "should we?" not just "can we?"
- Share your concerns about AI's direction
- Support others in living these values
Join Us in Living These Values
These values aren't perfection we've achieved - they're aspirations we work toward every day.
Share Your Story: When has wisdom, integrity, or compassion guided you to a better decision?
Challenge Our Thinking: What blind spots do you see? How can we better embody these values?
Propose Additions: What other uniquely human qualities should guide our work?
The Choice Before Us
Every day, we choose between:
The Easy Path: Let AI handle everything, absolve ourselves of responsibility, optimize for metrics
The Right Path: Preserve human wisdom, take responsibility for outcomes, optimize for human flourishing
The future will reflect the values we choose today. Let's make sure they're values we're proud to pass on.
These values guide every decision, every tool, and every interaction in our community. They're not rules to follow, but principles to embody as we build a future where technology amplifies the best of humanity.