# Contributing
Welcome to the ruv-FANN Neural Intelligence Framework! We're excited that you're interested in contributing to our project. This guide will help you get started with contributing to any of our three core components: ruv-FANN Core, Neuro-Divergent, and ruv-swarm.
## Table of Contents
- Code of Conduct
- Getting Started
- Development Setup
- Code Style Guidelines
- Pull Request Process
- Testing Requirements
- Community Guidelines
- Project Structure
- Contribution Areas
- Performance Guidelines
- Documentation Standards
## Code of Conduct
We are committed to providing a welcoming and inclusive environment for all contributors. By participating in this project, you agree to abide by our code of conduct:
- Be respectful: Treat all community members with respect and kindness
- Be inclusive: Welcome newcomers and encourage diverse perspectives
- Be constructive: Provide helpful feedback and criticism
- Be collaborative: Work together to achieve common goals
- Be professional: Maintain professionalism in all interactions
## Getting Started
### Prerequisites
Before contributing, ensure you have the following installed:
- Rust 1.81+: Install via rustup
- Node.js 18+: For JavaScript/NPM components
- Git: For version control
- wasm-pack: For WebAssembly builds
  ```bash
  curl https://rustwasm.github.io/wasm-pack/installer/init.sh -sSf | sh
  ```
1. Fork the Repository
   ```bash
   # Fork the repo on GitHub, then clone your fork
   git clone https://github.com/your-username/ruv-FANN.git
   cd ruv-FANN
   ```
2. Set Up Remote
   ```bash
   git remote add upstream https://github.com/ruvnet/ruv-FANN.git
   ```
3. Initialize ruv-swarm (Recommended)
   ```bash
   # Use our swarm-based contribution system
   npx ruv-swarm@latest init --github-swarm
   npx ruv-swarm contribute --type "feature|bug|docs"
   ```
## Development Setup

### ruv-FANN Core

```bash
# Build the main library
cargo build --workspace
cargo test --workspace

# Run with different feature flags
cargo build --features "gpu,parallel,wasm"
cargo test --features "test-all"
```

### Neuro-Divergent

```bash
cd neuro-divergent
cargo build --release
cargo test --release

# Run benchmarks
cargo bench
```

### ruv-swarm

```bash
cd ruv-swarm

# Rust components
cargo build --workspace
cargo test --workspace

# NPM components
cd npm
npm install
npm test
npm run build:wasm
```
VS Code Extensions (Recommended):
- rust-analyzer
- CodeLLDB (debugging)
- Better TOML
- crates
- Error Lens
Settings for optimal development:

```json
{
  "rust-analyzer.cargo.features": ["default", "gpu", "parallel"],
  "rust-analyzer.check.command": "clippy",
  "editor.formatOnSave": true
}
```
## Code Style Guidelines
We follow the official Rust style guide, with project-specific conventions:
```rust
/// Calculates attention weights using Cartan matrix semantics.
///
/// This function implements the core semantic attention mechanism
/// that enables neural networks to understand mathematical structures.
///
/// # Arguments
/// * `input` - Input tensor with shape [batch_size, seq_len, dim]
/// * `cartan_matrix` - Cartan matrix defining the root system
///
/// # Returns
/// Attention weights tensor with shape [batch_size, heads, seq_len, seq_len]
///
/// # Example
/// ```
/// use ruv_fann::attention::semantic_attention;
/// let weights = semantic_attention(&input, &cartan_matrix)?;
/// ```
pub fn semantic_attention(
    input: &Tensor,
    cartan_matrix: &CartanMatrix,
) -> Result<Tensor, AttentionError> {
    // Implementation...
}
```
```rust
// Use proper error types
#[derive(Debug, thiserror::Error)]
pub enum NeuralError {
    #[error("Invalid tensor dimensions: expected {expected}, got {actual}")]
    InvalidDimensions { expected: usize, actual: usize },

    #[error("WASM module failed to load: {0}")]
    WasmLoadError(#[from] wasmtime::Error),

    #[error("GPU memory allocation failed")]
    GpuMemoryError,
}

// Use Result types consistently
fn train_network(data: &TrainingData) -> Result<Network, NeuralError> {
    let weights = initialize_weights(data.input_size())?;
    let network = Network::new(weights)?;
    Ok(network)
}

// Use SIMD when available
#[cfg(target_feature = "simd128")]
fn vectorized_operation(data: &[f32]) -> Vec<f32> {
    use std::simd::*;
    // SIMD implementation
    todo!()
}

#[cfg(not(target_feature = "simd128"))]
fn vectorized_operation(data: &[f32]) -> Vec<f32> {
    // Fallback implementation
    todo!()
}

// Prefer rayon for parallelization
use rayon::prelude::*;

fn parallel_training(batches: &[TrainingBatch]) -> Vec<ModelUpdate> {
    batches
        .par_iter()
        .map(|batch| train_batch(batch))
        .collect()
}

// Use appropriate data structures
use std::collections::HashMap;
use indexmap::IndexMap; // For ordered maps
use smallvec::SmallVec; // For small vectors

// Manage GPU memory carefully
struct GpuBuffer {
    buffer: wgpu::Buffer,
    size: u64,
}

impl Drop for GpuBuffer {
    fn drop(&mut self) {
        // Ensure proper cleanup
        self.buffer.destroy();
    }
}
```
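As a small illustration of the SmallVec guideline above: tensor shapes rarely exceed a handful of dimensions, so they can be kept inline on the stack. A minimal sketch, assuming a hypothetical `Shape` alias (not an existing project type):

```rust
use smallvec::SmallVec;

/// Illustrative alias: up to four dimensions live inline on the stack;
/// only larger shapes spill to the heap.
type Shape = SmallVec<[usize; 4]>;

fn output_shape(batch: usize, seq_len: usize, dim: usize) -> Shape {
    // No heap allocation for the common 3-D case.
    SmallVec::from_slice(&[batch, seq_len, dim])
}
```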
Always run before committing:

```bash
# Format all code
cargo fmt --all

# Check lints
cargo clippy --workspace --all-targets -- -D warnings

# Fix common issues
cargo fix --workspace --allow-dirty
```
## Pull Request Process
For Significant Changes:
- Create or find a related issue
- Discuss your approach in the issue
- Get approval from maintainers for large features
For Bug Fixes:
- Reproduce the bug and document steps
- Create a failing test that demonstrates the issue (see the sketch below)
- Implement the fix
- Verify the test now passes
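A minimal sketch of the failing-test step, under illustrative assumptions (`TrainingData::default()` producing an empty dataset, and the `train_network` signature from the style examples above, are not confirmed project APIs):

```rust
// Hypothetical regression test, written before the fix so it fails first.
#[test]
fn regression_empty_training_data_is_rejected() {
    // Arrange: reproduce the reported conditions (here: an empty dataset).
    let empty = TrainingData::default();
    // Before the fix this returned Ok(_); after the fix it must be an error.
    assert!(train_network(&empty).is_err());
}
```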
Branch Naming:

```bash
# Feature branches
git checkout -b feature/semantic-cartan-attention
git checkout -b feature/gpu-memory-optimization

# Bug fixes
git checkout -b fix/wasm-loading-error
git checkout -b fix/memory-leak-in-swarm

# Documentation
git checkout -b docs/api-reference-update
git checkout -b docs/contributing-guide
```
Commit Message Format:

We use Conventional Commits:

```
type(scope): description

[optional body]

[optional footer]
```
Types:

- `feat`: New features
- `fix`: Bug fixes
- `docs`: Documentation changes
- `style`: Code style changes
- `refactor`: Code refactoring
- `perf`: Performance improvements
- `test`: Test additions/changes
- `chore`: Maintenance tasks
Examples:

```
feat(attention): implement semantic Cartan matrix attention

Add new attention mechanism based on Cartan matrix semantics.
This enables neural networks to understand mathematical structures
in the input data.

- Implements CartanAttentionHead struct
- Adds comprehensive tests for all root systems
- Includes benchmarks showing 15% speed improvement
- Updates documentation with usage examples

Closes #123
```

```
fix(wasm): resolve module loading race condition

Fix race condition in WASM module initialization that caused
intermittent failures in multi-threaded environments.

The issue occurred when multiple threads tried to initialize
the same module simultaneously. Added proper synchronization
using Arc<Mutex<>> around the module cache.

Fixes #456
```
Before Submitting PR:

```bash
# Run all tests
cargo test --workspace --all-features

# Run specific component tests
cargo test -p ruv-fann --features "gpu,parallel"
cargo test -p neuro-divergent --release
cd ruv-swarm && npm test

# Run benchmarks to check performance
cargo bench

# Build documentation
cargo doc --workspace --no-deps

# Check WASM builds
wasm-pack build --target web
```
PR Template:

```markdown
## Description
Brief description of the changes and their purpose.
## Type of Change
- [ ] Bug fix (non-breaking change that fixes an issue)
- [ ] New feature (non-breaking change that adds functionality)
- [ ] Breaking change (fix or feature that causes existing functionality to not work as expected)
- [ ] Performance improvement
- [ ] Documentation update
## Component(s) Affected
- [ ] ruv-FANN Core
- [ ] Neuro-Divergent
- [ ] ruv-swarm
- [ ] Semantic Cartan Matrix
- [ ] Documentation
## Testing
- [ ] Unit tests pass
- [ ] Integration tests pass
- [ ] Performance benchmarks run
- [ ] WASM builds successfully
- [ ] Manual testing completed
## Performance Impact
- [ ] No performance impact
- [ ] Performance improved (include benchmark results)
- [ ] Performance regression (justified and documented)
## Breaking Changes
List any breaking changes and migration steps.
## Checklist
- [ ] Code follows style guidelines
- [ ] Self-review completed
- [ ] Documentation updated
- [ ] Tests added for new functionality
- [ ] All CI checks pass
- [ ] Linked to relevant issues
Closes #(issue number)
```
Review process:
- Automated Checks: CI/CD pipeline runs all tests
- Code Review: Maintainers review code quality and design
- Testing: QA testing for significant features
- Documentation Review: Ensure docs are updated
- Performance Review: Check benchmark impacts
- Final Approval: Maintainer approval for merge
## Testing Requirements

Test organization:

```
tests/
├── unit/           # Unit tests for individual components
│   ├── neural/     # Neural network core tests
│   ├── attention/  # Attention mechanism tests
│   ├── swarm/      # Swarm coordination tests
│   └── wasm/       # WASM-specific tests
├── integration/    # Cross-component integration tests
├── benchmarks/     # Performance benchmarks
├── property/       # Property-based tests with proptest
└── fixtures/       # Test data and utilities
```
Unit test example:

```rust
#[cfg(test)]
mod tests {
    use super::*;
    use approx::assert_relative_eq;
    use proptest::prelude::*;

    #[test]
    fn test_semantic_attention_basic() {
        let input = Tensor::zeros(&[2, 10, 64]);
        let cartan = CartanMatrix::a_series(3);
        let result = semantic_attention(&input, &cartan).unwrap();
        assert_eq!(result.shape(), &[2, 8, 10, 10]);
    }

    #[test]
    fn test_attention_weights_sum_to_one() {
        let input = Tensor::random(&[1, 5, 32]);
        let cartan = CartanMatrix::b_series(2);
        let weights = semantic_attention(&input, &cartan).unwrap();
        let sums = weights.sum_dim(-1);
        for &sum in sums.iter() {
            assert_relative_eq!(sum, 1.0, epsilon = 1e-6);
        }
    }

    proptest! {
        #[test]
        fn test_attention_properties(
            batch_size in 1..10usize,
            seq_len in 1..50usize,
            dim in 16..128usize,
        ) {
            let input = Tensor::random(&[batch_size, seq_len, dim]);
            let cartan = CartanMatrix::a_series(3);
            let result = semantic_attention(&input, &cartan);
            prop_assert!(result.is_ok());

            let weights = result.unwrap();
            prop_assert_eq!(weights.shape()[0], batch_size);
            prop_assert_eq!(weights.shape()[2], seq_len);
        }
    }
}
```
Integration test example:

```rust
// tests/integration/neural_pipeline.rs
use ruv_fann::prelude::*;

#[tokio::test]
async fn test_full_training_pipeline() {
    let config = NetworkConfig::default()
        .with_layers(&[784, 128, 64, 10])
        .with_activation(ActivationFunction::ReLU)
        .with_cartan_attention(true);

    let mut network = Network::new(config).unwrap();
    let training_data = load_test_dataset().unwrap();

    // Train the network
    let trainer = Trainer::new()
        .with_learning_rate(0.001)
        .with_batch_size(32);

    let metrics = trainer.train(&mut network, &training_data).await.unwrap();

    assert!(metrics.final_accuracy > 0.8);
    assert!(metrics.convergence_epochs < 100);
}
```
Benchmark example:

```rust
// benches/attention_benchmarks.rs
use criterion::{black_box, criterion_group, criterion_main, BenchmarkId, Criterion};
use ruv_fann::attention::*;

fn benchmark_semantic_attention(c: &mut Criterion) {
    let mut group = c.benchmark_group("semantic_attention");

    for seq_len in [32, 64, 128, 256].iter() {
        group.bench_with_input(
            BenchmarkId::new("cartan_attention", seq_len),
            seq_len,
            |b, &seq_len| {
                let input = Tensor::random(&[1, seq_len, 64]);
                let cartan = CartanMatrix::a_series(4);
                b.iter(|| semantic_attention(black_box(&input), black_box(&cartan)));
            },
        );
    }

    group.finish();
}

criterion_group!(benches, benchmark_semantic_attention);
criterion_main!(benches);
```
Coverage requirements:
- Minimum Coverage: 80% line coverage
- Critical Paths: 95% coverage for core algorithms
- Public APIs: 100% coverage for all public functions
- Error Paths: All error conditions must be tested (see the sketch below)
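A sketch of what an error-path test can look like, reusing the `NeuralError` and `train_network` sketches from the style guide above (the `with_input_size` constructor is an illustrative assumption, not a confirmed API):

```rust
// Sketch: exercise an error path explicitly, not just the happy path.
#[test]
fn invalid_dimensions_are_reported() {
    // Hypothetical setup that produces mismatched tensor dimensions.
    let bad_data = TrainingData::with_input_size(0);
    let result = train_network(&bad_data);
    // Assert the specific error variant, not merely that it failed.
    assert!(matches!(result, Err(NeuralError::InvalidDimensions { .. })));
}
```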
## Community Guidelines

Communication channels:
- GitHub Issues: Bug reports, feature requests, and technical discussions
- GitHub Discussions: General questions, ideas, and community chat
- Discord: Real-time collaboration and support (if available)
- Twitter/X: Project updates and announcements
Getting help:
- Check Documentation: Review the wiki and API docs
- Search Issues: Look for existing discussions on your topic
- Ask in Discussions: Use GitHub Discussions for questions
- Join Discord: Get real-time help from the community
Bug Reports:
```markdown
**Bug Description**
Clear, concise description of the bug.
**Steps to Reproduce**
1. Step one
2. Step two
3. See error
**Expected Behavior**
What you expected to happen.
**Actual Behavior**
What actually happened.
**Environment**
- OS: [e.g., Ubuntu 22.04]
- Rust version: [e.g., 1.81]
- ruv-FANN version: [e.g., 0.1.6]
- GPU: [if applicable]
**Additional Context**
Any other relevant information.
```
Feature Requests:
```markdown
**Feature Description**
What feature would you like to see added?
**Use Case**
Why is this feature needed? What problem does it solve?
**Proposed Implementation**
If you have ideas about implementation, share them here.
**Alternatives Considered**
What alternatives have you considered?
```
## Project Structure

```
ruv-FANN/
├── src/                     # Core ruv-FANN library
│   ├── neural/              # Neural network implementations
│   ├── activation/          # Activation functions
│   ├── training/            # Training algorithms
│   ├── io/                  # Input/output utilities
│   └── webgpu/              # GPU acceleration
├── neuro-divergent/         # Advanced forecasting models
│   ├── src/                 # Model implementations
│   ├── examples/            # Usage examples
│   └── tests/               # Model-specific tests
├── ruv-swarm/               # Distributed swarm intelligence
│   ├── src/                 # Rust implementation
│   ├── npm/                 # NPM package
│   └── docs/                # Swarm documentation
├── Semantic_Cartan_Matrix/  # Mathematical foundation
│   ├── micro_cartan_attn/   # Attention mechanisms
│   ├── micro_core/          # Core operations
│   ├── micro_swarm/         # Swarm coordination
│   └── micro_routing/       # Routing algorithms
├── cuda-wasm/               # CUDA to WASM transpiler
├── simulator/               # Neural chip simulation
└── examples/                # Cross-project examples
```
## Contribution Areas

### ruv-FANN Core

What to work on:
- Activation functions
- Training algorithms
- Network architectures
- GPU acceleration
- WASM bindings
Skills needed:
- Rust programming
- Neural network theory
- GPU programming (optional)
- WebAssembly (optional)
### Neuro-Divergent

What to work on:
- LSTM implementations
- Transformer models
- Time series analysis
- Model optimization
- Python compatibility
Skills needed:
- Rust programming
- Machine learning
- Time series analysis
- Mathematical modeling
### ruv-swarm

What to work on:
- Agent coordination
- Topology optimization
- Distributed algorithms
- CLI tools
- MCP integration
Skills needed:
- Rust programming
- JavaScript/TypeScript
- Distributed systems
- Graph theory
### Semantic Cartan Matrix

What to work on:
- Lie algebra implementations
- Root system computations
- Orthogonalization algorithms
- Mathematical optimizations
Skills needed:
- Advanced mathematics
- Rust programming
- Linear algebra
- Group theory
### Documentation

What to work on:
- API documentation
- Tutorials and guides
- Code examples
- Performance benchmarks
Skills needed:
- Technical writing
- Programming in Rust/JS
- Understanding of the domain
## Performance Guidelines

- Measure First: Always profile before optimizing
- Focus on Hot Paths: Optimize the most frequently called code
- Memory Efficiency: Minimize allocations in critical paths
- Vectorization: Use SIMD instructions where possible
- Parallelization: Leverage multi-core processing with rayon
```rust
// Always include benchmarks for performance-critical code
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn benchmark_critical_function(c: &mut Criterion) {
    let data = setup_benchmark_data();
    c.bench_function("critical_function", |b| {
        b.iter(|| critical_function(black_box(&data)));
    });
}
```
Performance targets:
- Training Speed: >1000 samples/second on CPU
- Inference Speed: <1ms per sample for small networks (see the smoke-test sketch below)
- Memory Usage: <100MB for typical use cases
- WASM Performance: Within 2x of native performance
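These targets can be spot-checked with a simple smoke test like the sketch below. The `build_small_test_network` helper and the `infer` method are illustrative assumptions, not confirmed project APIs; criterion benchmarks remain the authoritative measurement.

```rust
use std::time::Instant;

// Rough guardrail for the "<1 ms per sample" inference target.
#[test]
fn inference_latency_smoke_test() {
    let network = build_small_test_network(); // hypothetical helper
    let sample = Tensor::random(&[1, 784]);

    let iterations: u32 = 1_000;
    let start = Instant::now();
    for _ in 0..iterations {
        let _ = network.infer(&sample); // `infer` is an assumed API
    }
    let per_sample = start.elapsed() / iterations;

    assert!(per_sample.as_micros() < 1_000, "inference exceeded 1 ms/sample");
}
```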
## Documentation Standards

All public APIs must be documented with rustdoc:
```rust
/// Computes semantic attention using Cartan matrix structure.
///
/// This function implements attention mechanisms that understand
/// the underlying mathematical structure of the data through
/// Cartan matrices from Lie algebra theory.
///
/// # Parameters
///
/// * `query` - Query tensor with shape [batch, seq_len, d_model]
/// * `key` - Key tensor with shape [batch, seq_len, d_model]
/// * `value` - Value tensor with shape [batch, seq_len, d_model]
/// * `cartan_matrix` - Cartan matrix defining the root system
///
/// # Returns
///
/// Returns the attention output tensor with shape [batch, seq_len, d_model]
///
/// # Errors
///
/// Returns `AttentionError` if:
/// - Input tensors have incompatible shapes
/// - Cartan matrix has invalid dimensions
/// - Memory allocation fails
///
/// # Examples
///
/// ```rust
/// use ruv_fann::attention::{semantic_attention, CartanMatrix};
/// use ruv_fann::tensor::Tensor;
///
/// let query = Tensor::random(&[2, 10, 64]);
/// let key = Tensor::random(&[2, 10, 64]);
/// let value = Tensor::random(&[2, 10, 64]);
/// let cartan = CartanMatrix::a_series(3);
///
/// let output = semantic_attention(&query, &key, &value, &cartan)?;
/// assert_eq!(output.shape(), &[2, 10, 64]);
/// ```
///
/// # Performance
///
/// This function is optimized for:
/// - SIMD vectorization on x86_64
/// - Parallel computation across attention heads
/// - Memory-efficient implementation with minimal allocations
///
/// Typical performance: ~50 µs for sequence length 128 on modern CPUs.
```
For user guides and tutorials:
- Clear Examples: Every feature should have working examples
- Common Patterns: Document typical usage patterns
- Troubleshooting: Address common issues and solutions
- Migration Guides: Help users upgrade between versions
When adding major new components:
- Design Document: Create a detailed design in issues
- API Discussion: Get feedback on proposed APIs
- Prototype: Implement a minimal working version
- Testing: Comprehensive test suite
- Documentation: Full documentation and examples
- Integration: Ensure compatibility with existing code
For performance-critical contributions:
- Baseline Benchmarks: Establish current performance
- Optimization Implementation: Make targeted improvements
- Measurement: Verify improvements with benchmarks
- Regression Tests: Ensure no performance regressions (see the configuration sketch below)
- Documentation: Document the optimization techniques
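One way to make regressions easier to spot is a stricter criterion configuration, sketched below. The sample size and significance level are illustrative choices, and `benchmark_critical_function` refers to the benchmark example earlier in this guide.

```rust
use criterion::{criterion_group, criterion_main, Criterion};

// Tighter statistics make small regressions more likely to be flagged.
fn regression_config() -> Criterion {
    Criterion::default()
        .sample_size(200)         // more samples -> tighter confidence intervals
        .significance_level(0.05) // stricter threshold for "changed" verdicts
}

criterion_group! {
    name = benches;
    config = regression_config();
    targets = benchmark_critical_function
}
criterion_main!(benches);
```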
For changes that break existing APIs:
- Deprecation Period: Mark old APIs as deprecated (see the sketch below)
- Migration Guide: Provide clear upgrade instructions
- Compatibility Layer: Maintain compatibility when possible
- Version Bump: Follow semantic versioning
- Communication: Announce changes to the community
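A minimal sketch of the deprecation step, using Rust's built-in attribute. The function names, version number, and `TrainConfig` type are illustrative, not existing project APIs:

```rust
/// Old entry point, kept for one release cycle before removal.
#[deprecated(since = "0.2.0", note = "use `train_with_config` instead")]
pub fn train_legacy(data: &TrainingData) -> Result<Network, NeuralError> {
    // Forward to the replacement so behavior stays identical.
    train_with_config(data, &TrainConfig::default())
}
```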
Contributing to ruv-FANN means joining a community dedicated to making neural intelligence accessible, composable, and precise. Whether you're fixing a typo, implementing a new feature, or optimizing performance, every contribution matters.
Ready to contribute? Start by exploring our open issues or joining our community discussions!
Built with ❤️ and 🦀 by the rUv community