guidelines - zfifteen/unified-framework GitHub Wiki
These guidelines ensure that all contributions to the Z Framework maintain the highest standards of mathematical accuracy, computational precision, and scientific rigor required for research-grade software.
- All mathematical claims must be supported by rigorous proofs or empirical validation
- Statistical significance required: p < 10⁻⁶ for all empirical claims
- High-precision arithmetic mandatory: mpmath with dps=50+ decimal places
- Independent verification encouraged for all major contributions
- Comprehensive unit testing with >90% coverage for new code
- Performance benchmarking required for optimization changes
- Clear, documented interfaces following framework patterns
- Robust error handling for edge cases and numerical instabilities
- LaTeX formatting for all mathematical expressions
- Complete cross-references between related documentation sections
- Practical examples included for all new features
- Regular accuracy reviews and updates
Requirements:
- Advanced mathematical background in relevant domains
- Rigorous proof methodology
- Statistical validation for empirical claims
- Framework consistency verification
Submission Process:
- Mathematical formulation with complete proofs
- Implementation with high-precision arithmetic
- Empirical validation with statistical significance testing
- Documentation with LaTeX-formatted equations
- Peer review by qualified mathematicians
Validation Standards:
```python
# Example validation pattern
import mpmath as mp
import numpy as np
from scipy import stats

mp.mp.dps = 50  # Required precision


def validate_mathematical_claim(theory_function, empirical_data, expected_value):
    """Validate a mathematical theory against empirical data."""
    # Theoretical prediction (one value per empirical observation)
    prediction = np.asarray(theory_function(empirical_data))

    # Statistical validation against the expected value
    t_stat, p_value = stats.ttest_1samp(prediction, expected_value)

    # Significance requirement
    assert p_value < 1e-6, f"Insufficient significance: p={p_value}"

    # Precision requirement on the mean prediction
    precision_error = abs(mp.mpf(prediction.mean()) - mp.mpf(expected_value))
    assert precision_error < mp.mpf('1e-16'), f"Precision error: {precision_error}"
    return True
```
Requirements:
- Numerical stability analysis
- Computational complexity assessment
- Performance benchmarking
- Memory efficiency evaluation
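One way to approach the numerical stability requirement is to recompute a result at a higher working precision and compare it against the baseline. The following is a minimal sketch, not a prescribed framework utility; `high_precision_calculation` is the same placeholder used in the implementation template below.

```python
# Minimal sketch of a numerical stability check: recompute at higher precision
# and verify that the baseline result agrees to within tolerance.
import mpmath as mp


def check_numerical_stability(parameters, base_dps=50, check_dps=100,
                              tolerance=mp.mpf('1e-16')):
    """Compare results computed at two working precisions."""
    mp.mp.dps = base_dps
    baseline = high_precision_calculation(parameters)  # placeholder routine

    mp.mp.dps = check_dps
    reference = high_precision_calculation(parameters)

    mp.mp.dps = base_dps  # restore the framework default precision
    drift = abs(mp.mpf(baseline) - mp.mpf(reference))
    return drift < tolerance, drift
```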
Implementation Standards:
```python
import logging

import mpmath as mp

logger = logging.getLogger(__name__)


def framework_algorithm(parameters):
    """Template for framework algorithm implementation."""
    # validate_inputs, high_precision_calculation and validate_result are
    # placeholders for project-specific helpers.
    # Input validation
    if not validate_inputs(parameters):
        raise ValueError("Invalid input parameters")

    # High-precision calculation
    mp.mp.dps = 50
    try:
        # Algorithm implementation
        result = high_precision_calculation(parameters)

        # Result validation
        if not validate_result(result):
            raise ValueError("Invalid result computed")
        return result
    except Exception as e:
        # Robust error handling
        logger.error(f"Algorithm error: {e}")
        raise
```
Architecture Requirements:
- Follow existing framework patterns
- Maintain backward compatibility
- Comprehensive API documentation
- Integration test coverage
Code Standards:
```python
import mpmath as mp


class FrameworkExtension:
    """Template for framework extension classes."""

    def __init__(self, config):
        """Initialize extension with configuration."""
        self.validate_config(config)
        self.setup_precision()

    def validate_config(self, config):
        """Validate configuration parameters."""
        required_params = ['precision', 'validation_threshold']
        for param in required_params:
            if param not in config:
                raise ValueError(f"Missing required parameter: {param}")

    def setup_precision(self):
        """Set up high-precision arithmetic."""
        mp.mp.dps = 50
```
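The backward-compatibility requirement above usually means that public signatures keep accepting old parameter names while new ones are introduced. The sketch below is hypothetical: the function and parameter names are illustrative and not part of the framework API.

```python
import warnings


def framework_transform(value, rate=None, *, b=None):
    """Hypothetical example of renaming parameter `b` to `rate`
    without breaking existing callers."""
    if b is not None:
        warnings.warn(
            "parameter 'b' is deprecated; use 'rate' instead",
            DeprecationWarning,
            stacklevel=2,
        )
        if rate is None:
            rate = b
    if rate is None:
        raise ValueError("a rate parameter is required")
    return value * rate
```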
Benchmarking Requirements:
- Baseline performance measurement
- Optimization impact quantification
- Memory usage analysis
- Scalability testing
Example Benchmark:
```python
import time

import memory_profiler


def benchmark_optimization(func, test_data, iterations=1000):
    """Benchmark function performance and memory usage."""
    # Memory usage before the run
    mem_before = memory_profiler.memory_usage()[0]

    # Timing
    start_time = time.time()
    for _ in range(iterations):
        result = func(test_data)
    end_time = time.time()

    mem_after = memory_profiler.memory_usage()[0]
    return {
        'avg_time': (end_time - start_time) / iterations,
        'memory_usage': mem_after - mem_before,
        'result_sample': result,
    }
```
Coverage Standards:
- Unit tests: >90% code coverage
- Integration tests: All module interactions
- Performance tests: Benchmark critical functions
- Validation tests: Mathematical correctness verification
Test Template:
```python
import unittest

import mpmath as mp
from scipy import stats

# framework_function, generate_test_data, load_expected_results, expected_mean,
# invalid_input, boundary_input and validate_boundary_result are placeholders
# for the module under test and its fixtures.


class TestFrameworkFunction(unittest.TestCase):
    def setUp(self):
        """Set up test environment."""
        mp.mp.dps = 50
        self.test_data = generate_test_data()
        self.expected_results = load_expected_results()

    def test_mathematical_correctness(self):
        """Test mathematical correctness."""
        result = framework_function(self.test_data)
        expected = self.expected_results['mathematical']

        # High-precision comparison
        precision_error = abs(result - expected)
        self.assertLess(precision_error, mp.mpf('1e-16'))

    def test_statistical_significance(self):
        """Test statistical significance."""
        results = [framework_function(data) for data in self.test_data]

        # Statistical validation
        t_stat, p_value = stats.ttest_1samp(results, expected_mean)
        self.assertLess(p_value, 1e-6)

    def test_edge_cases(self):
        """Test edge cases and error handling."""
        with self.assertRaises(ValueError):
            framework_function(invalid_input)

        # Boundary conditions
        boundary_result = framework_function(boundary_input)
        self.assertTrue(validate_boundary_result(boundary_result))
```
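The >90% coverage standard above can also be enforced automatically. The following is a minimal sketch using the coverage.py package; that the test suite lives under a `tests` directory is an assumption about the project layout.

```python
# Minimal sketch of an automated coverage gate, assuming coverage.py is
# installed and unit tests are discoverable under "tests".
import unittest

import coverage

cov = coverage.Coverage()
cov.start()

# Discover and run the unit test suite
suite = unittest.defaultTestLoader.discover("tests")
unittest.TextTestRunner(verbosity=2).run(suite)

cov.stop()
cov.save()

# Coverage.report() returns the total coverage percentage
total = cov.report(show_missing=True)
assert total > 90.0, f"Coverage below the required threshold: {total:.1f}%"
```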
Technical Writing:
- Clear, concise explanations
- Logical organization and flow
- Appropriate technical depth for audience
- Regular review and updates
Mathematical Documentation:
- LaTeX formatting for all equations
- Complete derivations for complex formulas
- Cross-references to related concepts
- Practical implementation examples
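As a small illustration of the LaTeX requirement, the framework's universal form (used again in the docstring example further below) would be formatted as:

```latex
% Example of the required LaTeX formatting for the universal form
\begin{equation}
  Z = A \cdot \left(\frac{B}{c}\right),
  \qquad A \text{: frame-dependent quantity},\;
  B \text{: rate parameter},\;
  c \text{: universal invariant}
\end{equation}
```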
Code Documentation:
- Comprehensive docstrings
- Parameter and return value descriptions
- Usage examples
- Error condition documentation
```python
def framework_function(parameter_a, parameter_b, precision=50):
    r"""
    Calculate framework-specific transformation.

    This function implements the core Z Framework transformation
    using high-precision arithmetic to ensure numerical stability.

    Mathematical Foundation:
        The transformation follows the universal form:

        .. math::
            Z = A \cdot \left(\frac{B}{c}\right)

        Where:
        - A: Frame-dependent measured quantity
        - B: Rate or transformation parameter
        - c: Universal invariant (speed of light)

    Parameters
    ----------
    parameter_a : float or mpmath.mpf
        Frame-dependent quantity A in the universal form
    parameter_b : float or mpmath.mpf
        Rate parameter B in the universal form
    precision : int, optional
        Decimal precision for calculations (default: 50)

    Returns
    -------
    mpmath.mpf
        Transformed value Z with specified precision

    Raises
    ------
    ValueError
        If parameters violate physical constraints
    TypeError
        If parameters are not numeric types

    Examples
    --------
    >>> import mpmath as mp
    >>> mp.mp.dps = 50
    >>> result = framework_function(1.0, 1e6)
    >>> print(f"Z = {result}")
    Z = 3.33564095198e-03

    See Also
    --------
    universal_form : Basic universal form calculation
    validate_parameters : Parameter validation utility

    References
    ----------
    .. [1] Z Framework Documentation, Mathematical Model Section
    .. [2] Core Principles, Universal Invariance Axiom
    """
    # Implementation with documentation standards
    pass
```
Pre-submission Checklist:
- All tests pass
- Documentation complete
- Performance benchmarks run
- Mathematical validation completed
Initial Review (1 week):
- Code quality assessment
- Documentation review
- Basic functionality testing
Technical Review (2-3 weeks):
- Mathematical correctness verification
- Performance impact analysis
- Integration testing
- Security review
Peer Review (2-4 weeks):
- Community feedback
- Independent validation
- Expert domain review
- Reproducibility verification
Final Approval (1-2 weeks):
- Maintainer review
- Integration testing
- Documentation finalization
- Merge approval
Mathematical Accuracy:
- Proofs verified by independent reviewers
- Empirical claims validated with p < 10⁻⁶
- Numerical precision maintained (Δₙ < 10⁻¹⁶)
- Cross-validation across multiple methods
Code Quality:
- Comprehensive test coverage (>90%)
- Performance benchmarks meet standards
- Clear, documented interfaces
- Robust error handling
Documentation Quality:
- Complete technical documentation
- Practical examples included
- Cross-references properly linked
- Mathematical notation properly formatted
```python
# Always set precision early
import mpmath as mp

mp.mp.dps = 50


# Use mpmath for all critical calculations
def critical_calculation(x):
    return mp.exp(mp.log(x) * mp.pi)


# Validate precision maintenance
def validate_precision(result, expected, tolerance=mp.mpf('1e-16')):
    error = abs(result - expected)
    assert error < tolerance, f"Precision loss: {error}"
```
```python
import numpy as np
from scipy import stats


def validate_statistical_claim(data, expected, alpha=1e-6):
    """Validate a statistical claim with the required significance level."""
    data = np.asarray(data, dtype=float)

    # Choose an appropriate statistical test
    if len(data) > 30:
        stat, p_value = stats.ttest_1samp(data, expected)
    else:
        stat, p_value = stats.wilcoxon(data - expected)

    # Verify significance
    assert p_value < alpha, f"Insufficient significance: p={p_value}"

    # Report effect size (standardized mean difference from the expected value)
    effect_size = (np.mean(data) - expected) / np.std(data)
    return p_value, effect_size
```
```python
# Cache expensive calculations
from functools import lru_cache

import numpy as np


@lru_cache(maxsize=10000)
def expensive_calculation(n):
    """Cached calculation for performance."""
    return complex_mathematical_operation(n)


# Use vectorization when possible (note: np.vectorize is a convenience wrapper,
# not a true speedup; it still calls framework_function once per element)
def vectorized_operation(data):
    """Apply framework_function element-wise over an array."""
    return np.vectorize(framework_function)(data)
```
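Where a transformation can be expressed directly with array arithmetic, genuine vectorization avoids the per-element overhead of `np.vectorize`. The sketch below uses the universal form from the documentation example above; the constant and function name are illustrative, not part of the framework API.

```python
import numpy as np

C = 299_792_458.0  # universal invariant c (speed of light, m/s)


def vectorized_universal_form(a_values, b_values):
    """Compute Z = A * (B / c) element-wise over whole arrays."""
    a = np.asarray(a_values, dtype=float)
    b = np.asarray(b_values, dtype=float)
    return a * (b / C)
```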
```python
import logging

logger = logging.getLogger(__name__)


def robust_framework_function(parameters):
    """Robust framework function with comprehensive error handling."""
    try:
        # Validate inputs
        validated_params = validate_and_convert(parameters)

        # Perform calculation
        result = core_calculation(validated_params)

        # Validate output
        if not is_valid_result(result):
            raise ValueError("Invalid result computed")
        return result
    except (ValueError, TypeError) as e:
        logger.error(f"Parameter error: {e}")
        raise
    except Exception as e:
        logger.error(f"Unexpected error: {e}")
        raise RuntimeError(f"Calculation failed: {e}") from e
```
```python
class FrameworkConfig:
    """Framework configuration management."""

    DEFAULT_PRECISION = 50
    DEFAULT_VALIDATION_THRESHOLD = 1e-6

    def __init__(self, **kwargs):
        self.precision = kwargs.get('precision', self.DEFAULT_PRECISION)
        self.threshold = kwargs.get('threshold', self.DEFAULT_VALIDATION_THRESHOLD)
        self.validate_config()

    def validate_config(self):
        """Validate configuration parameters."""
        if self.precision < 16:
            raise ValueError("Minimum precision is 16 decimal places")
        if self.threshold >= 1e-3:
            raise ValueError("Validation threshold too high")
```
Before Opening an Issue:
- Review existing documentation thoroughly
- Search previous issues and discussions
- Provide complete, minimal examples
- Include system information and versions
Bug Reports:
- Use descriptive titles
- Include reproduction steps
- Provide expected vs actual behavior
- Attach relevant logs and error messages
Feature Requests:
- Discuss on GitHub Discussions first
- Provide clear use case justification
- Consider implementation complexity
- Be open to alternative approaches
Guidelines Version: 2.1
Last Updated: August 2025
Next Review: February 2026