
Frequently Asked Questions (FAQ)

Your complete guide to understanding and using ruv-FANN, the neural intelligence framework.

🚀 Getting Started

Q: What is ruv-FANN and how is it different from other neural network libraries?

A: ruv-FANN is a comprehensive neural intelligence framework that focuses on:

  • Ephemeral Intelligence: Neural networks created on-demand, used, then dissolved
  • CPU-native Performance: Optimized for systems without GPUs using SIMD acceleration
  • Swarm Intelligence: Distributed agent coordination achieving 84.8% SWE-Bench solve rate
  • WebAssembly Ready: Runs everywhere from browsers to embedded systems
  • Mathematical Rigor: Based on Cartan matrix theory and Lie algebra

Unlike TensorFlow or PyTorch, which are general-purpose ML frameworks, ruv-FANN specializes in lightweight, purpose-built neural architectures that can be instantiated quickly for specific tasks.
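
To make "ephemeral" concrete, here is a minimal sketch assuming a builder-style API like the examples later in this FAQ (the method names are illustrative, not the exact ruv-FANN API):

// Hypothetical create-use-dissolve lifecycle of an ephemeral network
let net = NeuralNetwork::builder()
    .hidden_layers(&[16, 8])
    .build()?;                      // instantiated on demand for one task
let prediction = net.run(&input)?;  // used for inference
drop(net);                          // then dissolved, freeing all resources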

Q: What are the minimum system requirements?

A: Minimum:

  • Rust 1.70+
  • 2GB RAM
  • x86_64 or ARM64 processor
  • Any OS (Linux, macOS, Windows)

Recommended:

  • Rust 1.75+
  • 8GB RAM
  • Modern CPU with AVX2 support
  • SSD storage for better I/O performance

Optional but beneficial:

  • GPU with CUDA/OpenCL support
  • Node.js 18+ for swarm features
  • Docker for containerized deployment

Q: How do I get started quickly?

A: The fastest way is:

# Install via NPX (no installation required)
npx ruv-swarm@latest init --claude

# Or install globally
npm install -g ruv-swarm
cargo install ruv-fann

# Test installation
npx ruv-swarm test --self-check

Then follow our Quick Start Guide for your first neural network in 5 minutes.

🧠 Neural Networks

Q: What types of neural networks does ruv-FANN support?

A: ruv-FANN supports 27+ neural architectures:

Classical Networks:

  • Multi-Layer Perceptrons (MLP)
  • Radial Basis Functions (RBF)
  • Cascade Correlation Networks

Modern Architectures:

  • Long Short-Term Memory (LSTM)
  • Gated Recurrent Units (GRU)
  • Transformer Networks
  • N-BEATS (Neural Basis Expansion Analysis)
  • Temporal Convolutional Networks (TCN)

Specialized Types:

  • Cartan Attention Networks (our innovation)
  • Semantic Cartan Matrix Networks
  • Swarm-Coordinated Ensembles

Q: How does SIMD acceleration work and what performance gains can I expect?

A: SIMD (Single Instruction, Multiple Data) acceleration allows processing multiple data points simultaneously:

Performance Gains:

  • 2.8-4.4x speedup for vectorizable operations
  • 32.3% token reduction in swarm intelligence tasks
  • <100ms decision times for complex reasoning

Technical Details:

  • Uses AVX2 on Intel/AMD (8 floats per instruction)
  • Uses NEON on ARM (4 floats per instruction)
  • Automatic fallback for unsupported CPUs
  • Compile-time feature detection

Enable SIMD:

RUSTFLAGS="-C target-cpu=native" cargo build --release --features simd
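
Beyond compile-time flags, the usual pattern is runtime feature detection with a scalar fallback. A minimal sketch in plain Rust using only the standard library (illustrative, not ruv-FANN's internal dispatch code):

// Pick an AVX2 path when the CPU supports it; otherwise use scalar code.
fn dot(a: &[f32], b: &[f32]) -> f32 {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("avx2") {
            // A real kernel would be a #[target_feature(enable = "avx2")] fn
            // processing 8 f32 lanes per instruction.
            return a.iter().zip(b).map(|(x, y)| x * y).sum();
        }
    }
    // Scalar fallback for non-x86 targets and CPUs without AVX2.
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}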

Q: Can I use ruv-FANN without a GPU?

A: Absolutely! This is one of ruv-FANN's key advantages:

  • CPU-native design: Optimized for CPU-only environments
  • SIMD acceleration: Leverages CPU vector instructions
  • Memory efficient: Designed for resource-constrained environments
  • GPU optional: GPU support available but not required

Perfect for:

  • Edge computing
  • Embedded systems
  • Development environments without GPUs
  • Cost-sensitive deployments

Q: How do I choose the right network architecture for my problem?

A: Here's a decision guide:

Classification/Regression:

  • Simple: Use MLP with 1-2 hidden layers
  • Complex: Use deeper networks with dropout

Time Series/Forecasting:

  • Seasonal patterns: N-BEATS or TCN
  • Long sequences: LSTM or GRU
  • Real-time: Cartan Attention Networks

Sequence Processing:

  • Text/NLP: Transformer Networks
  • Speech/Audio: TCN or LSTM
  • Multi-modal: Ensemble methods

Quick Selection Helper:

let architecture = match problem_type {
    ProblemType::Classification => Architecture::MLP { layers: vec![64, 32] },
    ProblemType::TimeSeries => Architecture::LSTM { units: 128 },
    ProblemType::Attention => Architecture::CartanAttention { heads: 8 },
    ProblemType::Ensemble => Architecture::SwarmEnsemble { count: 5 },
};

๐Ÿ Swarm Intelligence

Q: What is swarm intelligence and why should I use it?

A: Swarm intelligence coordinates multiple neural networks to solve complex problems:

Benefits:

  • 84.8% SWE-Bench solve rate (vs 70.3% for Claude 3.7)
  • Fault tolerance: Individual agents can fail without affecting the swarm
  • Parallel processing: Multiple tasks executed simultaneously
  • Adaptive learning: Swarm improves through collective experience

Use Cases:

  • Complex reasoning tasks
  • Multi-step problem solving
  • Distributed computing
  • Real-time decision making

Q: How many agents should I use in a swarm?

A: Agent count depends on your use case:

Small Tasks (3-4 agents):

  • Simple classification
  • Single-step decisions
  • Resource-constrained environments

Medium Tasks (5-8 agents):

  • Multi-step reasoning
  • Code analysis and generation
  • Time series forecasting

Large Tasks (8+ agents):

  • Complex system design
  • Multi-modal processing
  • Research and analysis tasks

Auto-scaling:

# Let ruv-swarm decide optimal count
npx ruv-swarm init --auto-scale --max-agents 12

# Manual specification
npx ruv-swarm init --agents 6

Q: What swarm topologies are available?

A: ruv-FANN supports 5 swarm topologies:

Hierarchical: Best for structured problems

  • Queen agent coordinates worker agents
  • Clear command structure
  • Good for project management tasks

Mesh: Best for collaborative tasks

  • All agents communicate with all others
  • High fault tolerance
  • Good for research and analysis

Ring: Best for sequential processing

  • Agents pass information in a circle
  • Low communication overhead
  • Good for pipeline tasks

Star: Best for centralized coordination

  • Central hub manages all communication
  • Simple architecture
  • Good for aggregation tasks

Custom: Define your own topology

  • Flexible agent relationships
  • Application-specific optimization
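
As an illustration of the guide above, topology choice can be expressed in code; both enums below are hypothetical sketches, not the actual ruv-swarm API:

// Hypothetical mapping from task kind to swarm topology
enum Topology { Hierarchical, Mesh, Ring, Star, Custom }
enum TaskKind { ProjectManagement, Research, Pipeline, Aggregation, Other }

let topology = match task_kind {
    TaskKind::ProjectManagement => Topology::Hierarchical, // queen + workers
    TaskKind::Research          => Topology::Mesh,         // all-to-all, fault tolerant
    TaskKind::Pipeline          => Topology::Ring,         // sequential hand-off
    TaskKind::Aggregation       => Topology::Star,         // central hub
    TaskKind::Other             => Topology::Custom,       // define your own
};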

๐ŸŒ WebAssembly & Integration

Q: How do I use ruv-FANN in a web browser?

A: ruv-FANN compiles to WebAssembly for browser use:

# Build for web
wasm-pack build --target web --features wasm,simd

Then load the generated package in HTML:
<script type="module">
  import init, { NeuralNetwork } from './pkg/ruv_fann.js';
  
  async function run() {
    await init();
    const net = new NeuralNetwork(3, [4, 2], 1);
    const result = net.run([0.5, 0.3, 0.8]);
    console.log('Prediction:', result);
  }
  
  run();
</script>

Browser Support:

  • Chrome 88+ (full SIMD support)
  • Firefox 89+ (full SIMD support)
  • Safari 15+ (limited SIMD)
  • Edge 88+ (full SIMD support)

Q: Can I use ruv-FANN with Claude Code?

A: Yes! ruv-FANN has native Claude Code integration:

# Install MCP server
npx claude-flow@alpha mcp add ruv-swarm

# Use in Claude Code
# Just mention ruv-swarm in your requests and Claude will automatically use it

Available MCP Tools:

  • mcp__claude-flow__swarm_init - Initialize swarms
  • mcp__claude-flow__agent_spawn - Create specialized agents
  • mcp__claude-flow__task_orchestrate - Coordinate complex tasks
  • mcp__claude-flow__neural_train - Train neural patterns

Q: How do I integrate with existing Python ML workflows?

A: Use our Python bindings:

# Install Python bindings
pip install ruv-swarm-py[ml]

# Use with existing ML libraries
import ruv_swarm
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Create neural network
net = ruv_swarm.NeuralNetwork(
    input_size=10,
    hidden_layers=[64, 32],
    output_size=1
)

# Train with a pandas DataFrame (assumes df, features, and target are defined)
scaler = StandardScaler()
X_scaled = scaler.fit_transform(df[features])
net.train(X_scaled, df[target])

⚡ Performance & Optimization

Q: How do I optimize performance for my specific use case?

A: Performance optimization depends on your bottleneck:

CPU-bound tasks:

# Enable all optimizations (shell)
RUSTFLAGS="-C target-cpu=native -C opt-level=3" cargo build --release

// Use parallel processing
let trainer = BackpropTrainer::new()
    .parallel(true)
    .simd(true)
    .batch_size(32);

Memory-bound tasks:

// Optimize memory usage
let network = NeuralNetwork::builder()
    .memory_pool_size(1024 * 1024)  // 1MB pool
    .gradient_clipping(1.0)
    .batch_size(16)                 // Smaller batches
    .build()?;

I/O-bound tasks:

// Use streaming data
let data_loader = StreamingDataLoader::new("data.fann")
    .buffer_size(1000)
    .prefetch(2);

Q: What are the memory requirements for different network sizes?

A: Memory usage scales with network complexity:

Small Networks (< 1k parameters):

  • Memory: ~1-10 MB
  • Training time: Seconds
  • Inference: Microseconds

Medium Networks (1k-100k parameters):

  • Memory: ~10-100 MB
  • Training time: Minutes
  • Inference: Milliseconds

Large Networks (100k+ parameters):

  • Memory: ~100MB-1GB
  • Training time: Hours
  • Inference: ~10ms
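
As a rule of thumb, the weights alone take parameters × 4 bytes in f32: a 100k-parameter network needs only ~400 KB for its weights. The larger figures above include training state (Adam-style optimizers roughly triple weight memory), activations, and data buffers.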

Memory optimization:

// Reduce memory usage
let config = NetworkConfig::new()
    .precision(Precision::F16)      // Use half precision
    .quantization(true)             // Enable quantization
    .memory_mapping(true);          // Use memory mapping

Q: How does ruv-FANN compare to TensorFlow/PyTorch in terms of performance?

A: Performance comparison by use case:

Small to Medium Networks:

  • ruv-FANN: 2-4x faster inference, 50-70% less memory
  • TensorFlow/PyTorch: Better for very large models (>1M parameters)

CPU-only Environments:

  • ruv-FANN: Optimized SIMD acceleration, native performance
  • TensorFlow/PyTorch: GPU-focused, CPU performance varies

Deployment Size:

  • ruv-FANN: ~2-5MB WASM binary
  • TensorFlow.js: ~100-500MB
  • PyTorch Mobile: ~50-200MB

Startup Time:

  • ruv-FANN: <10ms initialization
  • TensorFlow/PyTorch: 1-5 seconds initialization

🔧 Development & Customization

Q: How do I create custom activation functions?

A: Implement the ActivationFunction trait:

use ruv_fann::activation::ActivationFunction;

#[derive(Debug, Clone)]
pub struct Swish {
    beta: f32,
}

impl ActivationFunction for Swish {
    fn activate(&self, x: f32) -> f32 {
        x / (1.0 + (-self.beta * x).exp())
    }
    
    fn derivative(&self, x: f32) -> f32 {
        let sigmoid = 1.0 / (1.0 + (-self.beta * x).exp());
        sigmoid + x * sigmoid * (1.0 - sigmoid) * self.beta
    }
}

// Use in network
let mut net = NeuralNetwork::builder()
    .activation_function(Box::new(Swish { beta: 1.0 }))
    .build()?;

Q: Can I create custom training algorithms?

A: Yes, implement the TrainingAlgorithm trait:

use ruv_fann::training::TrainingAlgorithm;

pub struct AdamOptimizer {
    learning_rate: f32,
    beta1: f32,
    beta2: f32,
    epsilon: f32,
    // ... state variables
}

impl TrainingAlgorithm for AdamOptimizer {
    fn train_epoch(&mut self, network: &mut NeuralNetwork, 
                   data: &[TrainingData]) -> f32 {
        // Implement Adam optimization
        // ...
    }
    
    fn update_weights(&mut self, gradients: &[f32], 
                      weights: &mut [f32]) {
        // Implement weight updates
        // ...
    }
}
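
For reference, the core update such an optimizer performs is the standard Adam rule; a self-contained sketch of the generic formula (not ruv-FANN code):

// Standard Adam step: m and v are running moment estimates, t is the step count.
fn adam_step(weights: &mut [f32], grads: &[f32],
             m: &mut [f32], v: &mut [f32], t: i32,
             lr: f32, beta1: f32, beta2: f32, eps: f32) {
    for i in 0..weights.len() {
        m[i] = beta1 * m[i] + (1.0 - beta1) * grads[i];            // first moment
        v[i] = beta2 * v[i] + (1.0 - beta2) * grads[i] * grads[i]; // second moment
        let m_hat = m[i] / (1.0 - beta1.powi(t));                  // bias correction
        let v_hat = v[i] / (1.0 - beta2.powi(t));
        weights[i] -= lr * m_hat / (v_hat.sqrt() + eps);
    }
}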

Q: How do I save and load trained models?

A: Multiple serialization formats supported:

// Save in native format (fastest)
network.save("model.ruv")?;
let network = NeuralNetwork::load("model.ruv")?;

// Save in FANN format (compatibility)
network.save_fann("model.net")?;
let network = NeuralNetwork::load_fann("model.net")?;

// Save in JSON format (human readable)
#[cfg(feature = "serde")]
{
    network.save_json("model.json")?;
    let network = NeuralNetwork::load_json("model.json")?;
}

// Save to bytes for embedding
let bytes = network.to_bytes()?;
let network = NeuralNetwork::from_bytes(&bytes)?;

๐Ÿ› Troubleshooting

Q: My network isn't converging. What should I do?

A: Try these solutions in order:

  1. Check your data:
// Normalize inputs
let scaler = StandardScaler::new();
let normalized_data = scaler.fit_transform(&training_data);

// Check for NaN/Inf values
assert!(!data.iter().any(|&x| x.is_nan() || x.is_infinite()));
  2. Adjust learning parameters:
let trainer = BackpropTrainer::new()
    .learning_rate(0.001)       // Lower learning rate
    .momentum(0.9)              // Add momentum
    .max_epochs(10000)          // More training time
    .desired_error(0.01);       // Relax error tolerance
  3. Try a different architecture:
// Add more neurons
let net = NeuralNetwork::builder()
    .hidden_layers(&[128, 64, 32])  // Deeper network
    .dropout(0.2)                   // Prevent overfitting
    .build()?;

Q: Why is my WASM build so large?

A: Optimize WASM size:

# Build with size optimization
wasm-pack build --release --target web --features wasm

# Use wasm-opt for further optimization
wasm-opt -Oz --enable-simd -o pkg/optimized.wasm pkg/ruv_fann_bg.wasm

# Enable compression in web server
# Gzip can reduce size by 60-80%

Size comparison:

  • Debug build: ~10-20MB
  • Release build: ~2-5MB
  • Optimized + gzipped: ~500KB-1MB
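
The Cargo release profile also matters for binary size; these are standard Rust settings (applied in your own crate's Cargo.toml, not specific to ruv-FANN):

[profile.release]
opt-level = "z"     # optimize for size rather than speed
lto = true          # link-time optimization across crates
codegen-units = 1   # better optimization at the cost of compile time
panic = "abort"     # drop unwinding machinery
strip = true        # strip debug symbols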

Q: How do I debug performance issues?

A: Use built-in profiling tools:

// Enable profiling
let mut net = NeuralNetwork::builder()
    .profiling(true)
    .debug_mode(cfg!(debug_assertions))
    .build()?;

// Train with metrics
let metrics = trainer.train_with_metrics(&mut net, &data)?;
println!("Training metrics: {:#?}", metrics);

// Profile specific operations
use ruv_fann::profiling::Timer;
let _timer = Timer::new("forward_pass");
let output = net.run(&input)?;

External profiling:

# Use cargo flamegraph
cargo install flamegraph
cargo flamegraph --bin your-app

# Use perf on Linux
perf record --call-graph=dwarf ./target/release/your-app
perf report

🔄 Migration & Compatibility

Q: How do I migrate from the original FANN library?

A: ruv-FANN provides compatibility tools:

// Load existing FANN files
let network = NeuralNetwork::load_fann("old_model.net")?;

// Convert training data
let data = TrainingData::from_fann_file("training.data")?;

// API compatibility layer
#[cfg(feature = "fann-compat")]
use ruv_fann::compat::fann;

let net = fann::fann_create_standard(3, 2, 3, 1);

Migration checklist:

  1. Convert data files using ruv-fann-convert
  2. Update API calls using compatibility layer
  3. Test numerical consistency
  4. Gradually adopt new features

Q: Can I use ruv-FANN with existing Python ML pipelines?

A: Yes, through multiple integration paths:

# Option 1: Python bindings
import ruv_swarm
model = ruv_swarm.NeuralNetwork(10, [64, 32], 1)

# Option 2: WASM in Jupyter
%%js
const wasmModule = await import('./pkg/ruv_fann.js');
await wasmModule.default();

# Option 3: Subprocess interface
import json
import subprocess
result = subprocess.run(['ruv-fann', 'predict', 'model.ruv'],
                        input=json.dumps(data), text=True)

📊 Licensing & Commercial Use

Q: Can I use ruv-FANN in commercial projects?

A: Yes! ruv-FANN uses dual licensing:

  • MIT License: Permissive, allows commercial use
  • Apache 2.0 License: Also allows commercial use with patent protection

Choose whichever license works best for your project. Both allow:

  • Commercial use
  • Modification
  • Distribution
  • Private use

Q: Are there any usage restrictions?

A: No significant restrictions:

  • โœ… Use in proprietary software
  • โœ… Sell products using ruv-FANN
  • โœ… Modify the source code
  • โœ… Use in SaaS applications

Requirements:

  • Include license notice in distributions
  • Don't claim to be the original author

Q: Do I need to open source my applications?

A: No. Both MIT and Apache 2.0 are permissive licenses that don't require you to open source your applications. You can build proprietary software using ruv-FANN.

๐Ÿค Community & Contributing

Q: How can I contribute to ruv-FANN?

A: We welcome contributions! Here's how to get started:

  1. Start with GitHub Issues:

    • Look for "good first issue" labels
    • Comment to claim an issue
  2. Use Swarm Contribution System:

# Initialize swarm-powered contribution
npx ruv-swarm contribute --type feature --issue 123

# The swarm will guide you through:
# - Code analysis
# - Implementation suggestions  
# - Automated testing
# - Pull request optimization
  3. Contribution Areas:
    • 🐛 Bug fixes
    • ✨ New features
    • 📚 Documentation
    • 🧪 Tests and benchmarks
    • 🎨 Examples and tutorials

Q: How do I report bugs or request features?

A: Use our GitHub issue system:

  1. Search existing issues first
  2. Use issue templates for consistency
  3. Include minimal reproduction cases
  4. Provide system information:
# Generate debug information
bash -c "$(curl -s https://raw.githubusercontent.com/ruvnet/ruv-FANN/main/scripts/debug-info.sh)"

Q: Where can I get help or discuss ruv-FANN?

A: Multiple community channels:

  • GitHub Issues for bug reports and feature requests
  • GitHub Discussions for questions and ideas
  • Discord for real-time help from the community

🔮 Future & Roadmap

Q: What's coming in future versions?

A: Our roadmap includes:

v0.4 (Q2 2024):

  • Enhanced GPU acceleration
  • More pre-trained models
  • Improved Python bindings
  • Mobile device optimization

v0.5 (Q3 2024):

  • Federated learning
  • Advanced swarm topologies
  • Real-time streaming
  • Edge device deployment

v1.0 (Q4 2024):

  • Production stability guarantees
  • Enterprise support options
  • Advanced monitoring
  • Performance SLA commitments

Q: Will ruv-FANN support [specific feature]?

A: Check our public roadmap or create a feature request. We prioritize features based on:

  • Community demand
  • Technical feasibility
  • Alignment with project goals
  • Available development resources

Q: How stable is ruv-FANN for production use?

A: Current stability status:

Core Library (ruv-fann):

  • ✅ Production ready for most use cases
  • ✅ Extensive test coverage (>90%)
  • ✅ API stability guarantees
  • API changes follow semantic versioning

Swarm Intelligence (ruv-swarm):

  • โš ๏ธ Beta quality - API may change
  • โœ… Core functionality stable
  • โš ๏ธ Advanced features experimental

Semantic Cartan Matrix:

  • โš ๏ธ Alpha quality - research project
  • ๐Ÿงช Experimental features
  • ๐Ÿ“š Academic use recommended

📞 Still Have Questions?

If your question isn't answered here:

  1. Search the documentation
  2. Check GitHub Issues
  3. Ask on Discord
  4. Create a new GitHub Discussion

For urgent issues: Use the Troubleshooting Guide first, then reach out to the community.


This FAQ is updated regularly. Last updated: 2025-01-01

โš ๏ธ **GitHub.com Fallback** โš ๏ธ