Home - ruvnet/ruv-FANN GitHub Wiki
What if intelligence could be ephemeral, composable, and surgically precise?
Welcome to ruv-FANN, a comprehensive neural intelligence framework that reimagines how we build, deploy, and orchestrate artificial intelligence. This repository contains groundbreaking projects that work together to deliver unprecedented performance in neural computing, forecasting, and multi-agent orchestration.
We believe AI should be:
- Ephemeral: Spin up intelligence when needed, dissolve when done
- Accessible: CPU-native, GPU-optional - built for the GPU-poor
- Composable: Mix and match neural architectures like LEGO blocks
- Precise: Tiny, purpose-built brains for specific tasks
This isn't about calling a model API. This is about instantiating intelligence.
1. ruv-FANN Core - The Foundation
A complete Rust rewrite of the legendary FANN (Fast Artificial Neural Network) library. Zero unsafe code, blazing performance, and full compatibility with decades of proven neural network algorithms.
Key Features:
- 100% memory-safe implementation in pure Rust
- Full compatibility with original FANN file formats
- SIMD-accelerated operations (2-4x performance boost)
- WebAssembly support for browser deployment
- Comprehensive API for building custom neural architectures
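The wiki doesn't show the library's internals here, so as an illustration only (this is not the ruv-fann API), a FANN-style fully connected layer boils down to a weighted sum plus bias, passed through an activation such as the sigmoid:

```rust
// Illustrative sketch only — NOT the ruv-fann API. All weights and the
// 2-3-1 shape below are arbitrary example values.

/// Logistic sigmoid activation.
fn sigmoid(x: f64) -> f64 {
    1.0 / (1.0 + (-x).exp())
}

/// One fully connected layer: weights[out][in], plus one bias per output neuron.
fn layer_forward(inputs: &[f64], weights: &[Vec<f64>], biases: &[f64]) -> Vec<f64> {
    weights
        .iter()
        .zip(biases)
        .map(|(ws, b)| {
            let sum: f64 = ws.iter().zip(inputs).map(|(w, x)| w * x).sum();
            sigmoid(sum + b)
        })
        .collect()
}

fn main() {
    // A tiny 2-3-1 network with fixed example weights.
    let hidden_w = vec![vec![0.5, -0.4], vec![0.3, 0.8], vec![-0.6, 0.1]];
    let hidden_b = vec![0.0, 0.1, -0.1];
    let out_w = vec![vec![1.0, -1.0, 0.5]];
    let out_b = vec![0.2];

    let hidden = layer_forward(&[1.0, 0.0], &hidden_w, &hidden_b);
    let output = layer_forward(&hidden, &out_w, &out_b);
    println!("output = {:?}", output);
}
```

Chaining such layers is all a feedforward pass is; the library adds training, file-format compatibility, and SIMD on top.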
2. Neuro-Divergent - Advanced Neural Forecasting
27+ state-of-the-art forecasting models (LSTM, N-BEATS, Transformers) with 100% compatibility with Python's NeuralForecast, running 2-4x faster with 25-35% less memory.
Key Features:
- Drop-in replacement for Python NeuralForecast
- 27+ neural forecasting models
- 2-4x training speed improvement
- 25-35% memory reduction
- Production-ready with enterprise features
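For context on what a forecaster does, here is a classical baseline of the kind neural models are benchmarked against — simple exponential smoothing, sketched from scratch (not the Neuro-Divergent API; the smoothing factor 0.5 is arbitrary):

```rust
// Illustrative sketch only — NOT the Neuro-Divergent API.
// Simple exponential smoothing: each new level blends the latest observation
// with the previous level; the one-step-ahead forecast is the final level.

fn ses_forecast(series: &[f64], alpha: f64) -> f64 {
    assert!(!series.is_empty(), "sketch assumes a non-empty series");
    let mut level = series[0];
    for &y in &series[1..] {
        level = alpha * y + (1.0 - alpha) * level;
    }
    level
}

fn main() {
    let series = [10.0, 12.0, 11.0, 13.0];
    println!("forecast = {:.3}", ses_forecast(&series, 0.5)); // forecast = 12.000
}
```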
3. ruv-swarm - Ephemeral Swarm Intelligence
The crown jewel. Achieves an 84.8% SWE-Bench solve rate, outperforming Claude 3.7 by 14.5 percentage points. Spin up lightweight neural networks that exist just long enough to solve problems.
Key Features:
- 84.8% SWE-Bench accuracy (industry-leading)
- Multi-agent orchestration with 5 topologies
- 27+ neuro-divergent models working in harmony
- Real-time coordination with <100ms decisions
- Claude Code MCP integration
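To make "ephemeral" concrete, here is a minimal sketch (plain std threads, not ruv-swarm) of workers that spin up, each solve one subtask, report a result, and dissolve; the squaring task is a stand-in for a purpose-built micro-network:

```rust
// Illustrative sketch only — NOT the ruv-swarm API.
use std::sync::mpsc;
use std::thread;

/// Hypothetical agent work: squares its input, standing in for a
/// task-specific micro-network.
fn solve_subtask(agent_id: usize, input: i64) -> (usize, i64) {
    (agent_id, input * input)
}

fn run_swarm(inputs: &[i64]) -> Vec<i64> {
    let (tx, rx) = mpsc::channel();
    for (id, &input) in inputs.iter().enumerate() {
        let tx = tx.clone();
        thread::spawn(move || {
            tx.send(solve_subtask(id, input)).unwrap();
            // The agent thread ends here: the "intelligence" dissolves when done.
        });
    }
    drop(tx); // close the main handle so the receiver drains and stops
    let mut results: Vec<(usize, i64)> = rx.iter().collect();
    results.sort(); // restore task order
    results.into_iter().map(|(_, r)| r).collect()
}

fn main() {
    println!("{:?}", run_swarm(&[1, 2, 3, 4])); // [1, 4, 9, 16]
}
```

The real system layers topologies, cognitive diversity, and MCP coordination over this spawn-solve-dissolve lifecycle.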
4. CUDA-WASM - GPU Acceleration Everywhere
Revolutionary transpiler that converts CUDA code to WebAssembly and WebGPU, enabling GPU-accelerated computing in web browsers with near-native performance.
Key Features:
- CUDA to WebAssembly transpilation
- WebGPU support for browser GPU acceleration
- 85-95% of native CUDA performance
- No NVIDIA dependencies required
- Works in browsers, Node.js, and edge devices
5. Neuro-Synaptic Simulator - Hardware Neural Networks
Advanced simulator for neuromorphic computing architectures, enabling hardware-accelerated neural processing with brain-inspired computing paradigms.
Key Features:
- Spiking neural network simulation
- Event-driven architecture
- Energy-efficient computing models
- Hardware co-design capabilities
- Real-time neural dynamics
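The basic unit a spiking-network simulator steps is the leaky integrate-and-fire neuron: the membrane potential decays, integrates input current, and emits a spike (then resets) when it crosses threshold. A minimal sketch with made-up parameters, not the simulator's API:

```rust
// Illustrative sketch only — NOT the Neuro-Synaptic Simulator API.
// Leaky integrate-and-fire: v decays by `leak` each step, integrates input,
// and spikes (resetting to 0.0) when it reaches `threshold`.

/// Advance the membrane potential one timestep; returns (new_v, spiked).
fn lif_step(v: f64, input_current: f64, leak: f64, threshold: f64) -> (f64, bool) {
    let v_next = v * leak + input_current;
    if v_next >= threshold {
        (0.0, true) // spike, then reset to resting potential
    } else {
        (v_next, false)
    }
}

fn main() {
    let (leak, threshold) = (0.9, 1.0);
    let mut v = 0.0;
    let mut spikes = 0;
    // Constant drive for 20 steps; count the emitted spikes.
    for _ in 0..20 {
        let (nv, fired) = lif_step(v, 0.3, leak, threshold);
        v = nv;
        if fired {
            spikes += 1;
        }
    }
    println!("spikes in 20 steps: {}", spikes); // spikes in 20 steps: 5
}
```

Event-driven simulation means only these spike events — not every neuron every tick — drive downstream computation, which is where the energy efficiency comes from.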
6. Semantic Cartan Matrix - Mathematical Neural Architecture
Revolutionary approach to neural networks using Cartan matrices from Lie algebra theory for mathematically principled attention mechanisms.
Key Features:
- Preservation of orthogonal semantic relationships
- 2.8-4.4x SIMD performance improvements
- Cartan matrix-constrained attention
- 32-dimensional root space transformations
- Mathematical guarantees for stability
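For readers unfamiliar with the underlying object: the Cartan matrix of the Lie algebra A_n has 2 on the diagonal and -1 between adjacent simple roots. A small sketch of constructing it (how the project applies it to attention is not shown here):

```rust
// Illustrative sketch only — builds the standard Cartan matrix of type A_n,
// not any part of the semantic-cartan-matrix crate.

fn cartan_a(n: usize) -> Vec<Vec<i32>> {
    (0..n)
        .map(|i| {
            (0..n)
                .map(|j| match (i as i64 - j as i64).abs() {
                    0 => 2,  // diagonal: <alpha_i, alpha_i> normalization
                    1 => -1, // adjacent simple roots
                    _ => 0,  // non-adjacent roots are orthogonal
                })
                .collect()
        })
        .collect()
}

fn main() {
    // A_3: [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]
    for row in cartan_a(3) {
        println!("{:?}", row);
    }
}
```

The fixed, sparse structure of such matrices is what makes "Cartan matrix-constrained attention" a strong mathematical prior rather than a learned free-for-all.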
7. OpenCV-Rust - Computer Vision Integration
Complete computer vision library with FANN integration, providing memory-safe OpenCV functionality with neural network pipelines.
Key Features:
- Full OpenCV 4.x API compatibility
- Zero unsafe code with Rust safety
- CUDA acceleration support
- WebAssembly deployment
- Integrated ML pipelines with FANN
8. DAA-Repository - Distributed Autonomous Agents
Infrastructure for distributed autonomous agent systems with self-organizing capabilities and emergent intelligence patterns.
Key Features:
- Self-organizing agent networks
- Distributed consensus protocols
- Emergent behavior patterns
- Fault-tolerant coordination
- Scalable to thousands of agents
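The simplest consensus primitive underlying such coordination is a majority vote over agent proposals; real protocols (Raft, PBFT) add terms, logs, and fault handling, but the core quorum check looks like this sketch (not the DAA API):

```rust
// Illustrative sketch only — NOT the DAA-Repository API.
// A value wins consensus here only if strictly more than half of the
// agents proposed it.
use std::collections::HashMap;

fn majority_value(proposals: &[&str]) -> Option<String> {
    let mut counts: HashMap<&str, usize> = HashMap::new();
    for &p in proposals {
        *counts.entry(p).or_insert(0) += 1;
    }
    counts
        .into_iter()
        .find(|&(_, c)| c * 2 > proposals.len())
        .map(|(v, _)| v.to_string())
}

fn main() {
    let proposals = ["apply-patch", "apply-patch", "rollback"];
    println!("{:?}", majority_value(&proposals)); // Some("apply-patch")
}
```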
9. DAA-Swarm - Swarm Coordination Layer
High-level swarm coordination and orchestration layer built on top of DAA-Repository for complex multi-agent scenarios.
Key Features:
- Advanced swarm topologies
- Inter-swarm communication
- Task distribution algorithms
- Load balancing and optimization
- Real-time swarm visualization
10. Coordination - System-Wide Coordination
Central coordination layer that manages all components, ensuring seamless integration and optimal resource utilization.
Key Features:
- Component lifecycle management
- Resource allocation optimization
- Inter-component communication
- Performance monitoring
- Fault recovery mechanisms
11. Neural DNA - Evolutionary Neural Networks
Genetic algorithms and evolutionary strategies for neural architecture search and optimization.
Key Features:
- Automated neural architecture search
- Genetic optimization algorithms
- Population-based training
- Neuroevolution strategies
- Self-modifying networks
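The loop behind neuroevolution-style search is mutate, evaluate, select. A minimal (1+1) evolutionary-strategy sketch — with a toy integer genome, a made-up fitness target of 42, and deterministic alternating mutations so the example needs no RNG — not the Neural DNA API:

```rust
// Illustrative sketch only — NOT the Neural DNA API.
// (1+1)-ES on a toy problem: one parent, one mutated child per generation,
// keep whichever scores better.

/// Toy fitness: how close the genome is to the (hypothetical) optimum 42.
fn fitness(genome: i64) -> i64 {
    -(genome - 42).abs()
}

fn evolve(mut parent: i64, generations: usize) -> i64 {
    for g in 0..generations {
        // Alternate +1/-1 "mutations" instead of random ones, for determinism.
        let child = if g % 2 == 0 { parent + 1 } else { parent - 1 };
        if fitness(child) >= fitness(parent) {
            parent = child; // selection: keep the better genome
        }
    }
    parent
}

fn main() {
    println!("best genome: {}", evolve(0, 200)); // best genome: 42
}
```

Replace the integer with an encoded architecture and the toy fitness with validation accuracy, and the same loop becomes neural architecture search.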
12. Memory - Distributed Memory Systems
Advanced memory management and caching systems for distributed neural processing.
Key Features:
- Distributed cache coherence
- Memory-efficient data structures
- Persistent neural states
- Cross-component memory sharing
- Garbage collection optimization
13. WASM - WebAssembly Runtime
Optimized WebAssembly runtime for neural network execution in browsers and edge devices.
Key Features:
- SIMD128 acceleration
- Memory64 support
- Streaming compilation
- Module caching
- Cross-platform compatibility
14. Docker - Containerized Deployment
Production-ready Docker configurations for all components with orchestration support.
Key Features:
- Multi-stage optimized builds
- Kubernetes configurations
- Docker Compose setups
- Health monitoring
- Auto-scaling support
15. Examples - Comprehensive Examples
Extensive collection of examples demonstrating all features and integration patterns.
Key Features:
- Getting started tutorials
- Integration examples
- Performance benchmarks
- Real-world use cases
- Best practices demonstrations
| Metric | ruv-FANN | Industry Standard | Improvement |
|---|---|---|---|
| SWE-Bench Solve Rate | 84.8% | 70.3% (Claude 3.7) | +14.5 pp |
| Token Efficiency | 32.3% fewer tokens | Baseline | -32.3% |
| Speed (tasks/sec) | 3,800 | ~860 | 4.4x |
| Memory Usage | 29% less | Baseline | -29% |
| Neural Training | 2-4x faster | Python baseline | 2-4x |
| Inference Latency | <100ms | ~500ms | ~5x lower |
| CUDA Transpilation | 85-95% of native | Native CUDA | Near-native |
| WebAssembly Performance | ~90% of native | Native | Near-native |
```bash
# NPX - No installation required!
npx ruv-swarm@latest init --claude

# NPM - Global installation
npm install -g ruv-swarm cuda-wasm

# Cargo - For Rust developers
cargo install ruv-swarm-cli

# Add to Rust project
cargo add ruv-fann neuro-divergent ruv-swarm semantic-cartan-matrix
```
```rust
use ruv_fann::prelude::*;

// Create a simple neural network
let mut nn = NeuralNetwork::builder()
    .input_neurons(2)
    .hidden_layer(3)
    .output_neurons(1)
    .activation_function(ActivationFunction::Sigmoid)
    .build()?;

// Train XOR function
let training_data = vec![
    (vec![0.0, 0.0], vec![0.0]),
    (vec![0.0, 1.0], vec![1.0]),
    (vec![1.0, 0.0], vec![1.0]),
    (vec![1.0, 1.0], vec![0.0]),
];
nn.train(&training_data, 1000, 0.01)?;
```
```bash
# Transpile CUDA to WebAssembly
npx cuda-wasm transpile matrix_multiply.cu -o matrix_multiply.wasm

# Run in browser
npx cuda-wasm serve matrix_multiply.wasm
```
```rust
use ruv_swarm::prelude::*;

// Initialize swarm
let mut swarm = Swarm::builder()
    .topology(TopologyType::Hierarchical)
    .max_agents(5)
    .cognitive_diversity(CognitiveDiversity::Balanced)
    .build()
    .await?;

// Solve complex task
let solution = swarm.orchestrate_task()
    .description("Fix Django ORM bug #12708")
    .execute()
    .await?;
```
- Installation Guide - Platform-specific setup
- Quick Start Guide - 5-minute tutorial
- Getting Started Guide - Comprehensive introduction
- ruv-FANN Core Documentation
- Neuro-Divergent Guide
- ruv-swarm Manual
- CUDA-WASM Transpiler
- Neuro-Synaptic Simulator
- System Architecture - Overall system design
- Technical Architecture - Deep technical details
- Design Patterns - Architectural patterns
- Component Overview - All components detailed
- API Reference - Complete API documentation
- CLI Tools - Command-line interfaces
- Code Examples - Practical implementations
- Integration Guides - Platform integration
- Performance Benchmarks - Detailed comparisons
- Neural Networks - Deep learning architectures
- Swarm Intelligence - Multi-agent systems
- Distributed Computing - Scaling strategies
- Production Deployment - Enterprise deployment
- Docker Deployment - Container orchestration
- Monitoring & Metrics - Observability
- Security Guide - Security best practices
- Scalability - Scaling strategies
- CUDA-WASM - GPU acceleration in browsers
- Quantum Integration - Quantum computing
- Edge Computing - Edge deployment
- Autonomous Agents - Self-organizing systems
- Contributing - How to contribute
- Community Resources - Forums and support
- Troubleshooting - Common issues
- FAQ - Frequently asked questions
- 84.8% SWE-Bench - Best-in-class problem solving
- 32.3% Token Reduction - Most efficient LLM usage
- 4.4x Performance - Fastest neural execution
- Zero Unsafe Code - 100% memory safety
- First Rust FANN - Complete safe reimplementation
- CUDA to WASM - GPU acceleration everywhere
- Cognitive Diversity - 27+ neural models in harmony
- Ephemeral Intelligence - On-demand neural networks
- Mathematical Rigor - Cartan matrix foundations
We use an innovative swarm-based contribution system powered by ruv-swarm itself!
```bash
# Fork and clone
git clone https://github.com/your-username/ruv-FANN.git
cd ruv-FANN

# Initialize contribution swarm
npx ruv-swarm init --github-swarm

# Let the swarm guide your contribution
npx ruv-swarm contribute --type "feature|bug|docs"
```
See our Contributing Guide for details.
- Ocean(@ohdearquant) - Transformed FANN from mock to real neural networks
- Bron(@syndicate604) - Made JavaScript/WASM integration production-ready
- Jed(@jedarden) - Platform integration and scope management
- Shep(@elsheppo) - Testing framework and quality assurance
- FANN - Original Fast Artificial Neural Network library
- NeuralForecast - Forecasting model inspiration
- Claude MCP - Model Context Protocol
- Rust WASM - WebAssembly toolchain
Dual-licensed under:
- Apache License 2.0 (LICENSE-APACHE)
- MIT License (LICENSE-MIT)
Choose whichever license works best for your use case.
Built with ❤️ and 🦀 by the rUv team
Making intelligence ephemeral, accessible, and precise
Website • Documentation • Discord • Twitter