q_mini_wasm_v2 Wiki

Complete documentation for the quantum-inspired, ternary AI inference engine

Navigation

  • Getting Started: Quick start, building, and setup guides
  • Architecture: System design, QGNN, MoE routing, and core concepts
  • API Reference: Core API documentation
  • Research: Papers on quantum computing, cognitive ergonomics, and Betti numbers
  • Decisions: Architecture Decision Records (ADRs)

q_mini_wasm_v2: Quantum-Inspired Extreme-Edge AI Framework

A quantum-inspired, energy-efficient AI inference engine that operates entirely in a ternary GF(3) state space and exploits the Gottesman-Knill theorem for efficient classical simulability.

Quick Start

Clone the repository and run the pre-built executable:

```shell
git clone https://github.com/kennetholsenatm-gif/q_mini_wasm_v2.git
cd q_mini_wasm_v2
./qminiwasm.exe
```

Then open http://localhost:7345 in your browser.

That's it. No build step required for Windows users.

What This Is

q_mini_wasm_v2 is a quantum-inspired AI framework that uses ternary (3-state) computing instead of binary. It implements:

  • Ternary GF(3) Arithmetic: All operations use -1, 0, +1 states (trits) instead of 0, 1 bits
  • Mixture-of-Experts Routing: Sparse expert selection using tropical geometry
  • Forward-Forward Learning: Local, gradient-free learning without backpropagation
  • Quantum Stabilizer Formalism: Efficient classical simulation of quantum operations
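To make the GF(3) idea concrete, here is a minimal sketch of balanced-trit arithmetic. The engine's actual trit representation and API are not shown in this page, so the names (`Trit`, `gf3_add`, `gf3_mul`) are illustrative only: balanced trits {-1, 0, +1} are mapped to the field elements {2, 0, 1}, operated on mod 3, and mapped back.

```cpp
#include <cassert>
#include <cstdint>

// Illustrative sketch only -- not the engine's actual API.
// A balanced trit takes values {-1, 0, +1}.
using Trit = int8_t;

// Map a balanced trit to its GF(3) field element {0, 1, 2} (-1 maps to 2).
inline int to_field(Trit t) { return (t + 3) % 3; }

// Map a field element back to the balanced range (2 maps to -1).
inline Trit to_balanced(int v) { return v == 2 ? -1 : static_cast<Trit>(v); }

// Addition in GF(3): compute mod 3, then rebalance.
inline Trit gf3_add(Trit a, Trit b) {
    return to_balanced((to_field(a) + to_field(b)) % 3);
}

// Multiplication in GF(3): compute mod 3, then rebalance.
inline Trit gf3_mul(Trit a, Trit b) {
    return to_balanced((to_field(a) * to_field(b)) % 3);
}
```

Note that in this balanced encoding 1 + 1 = -1 (since 2 ≡ -1 mod 3), and negation is simply multiplication by -1, which is one reason ternary signed arithmetic is cheap.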

The system achieves <0.5 pJ/op energy efficiency for routing and 57% energy reduction over traditional approaches.

Features

  • Ternary Computing: GF(3) arithmetic with 99.06% entropy efficiency
  • Quantum-Inspired: Gottesman-Knill theorem for efficient simulation
  • Graph-Native MoE: O(E) complexity expert routing via QGNN
  • Forward-Forward Learning: Teacherless self-supervised learning
  • Web-Based UI: Built-in web UI served at http://localhost:7345 for training and inference
  • Live Data Integration: Automatic data acquisition from academic APIs (NASA, PubChem, OEIS, etc.)
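The tropical-geometry routing mentioned above can be sketched with the max-plus semiring, where "addition" is max and "multiplication" is ordinary +: each expert's score is a tropical dot product of its weights with the input, and routing picks the top-scoring expert. This is a hedged illustration of the general technique, not the engine's QGNN implementation; `tropical_scores` and `route` are hypothetical names.

```cpp
#include <algorithm>
#include <cstddef>
#include <limits>
#include <vector>

// Tropical "zero" is -infinity, the identity element for max.
constexpr double kTropicalZero = -std::numeric_limits<double>::infinity();

// Tropical matrix-vector product: s[e] = max_i (W[e][i] + x[i]).
std::vector<double> tropical_scores(const std::vector<std::vector<double>>& W,
                                    const std::vector<double>& x) {
    std::vector<double> s(W.size(), kTropicalZero);
    for (std::size_t e = 0; e < W.size(); ++e)
        for (std::size_t i = 0; i < x.size(); ++i)
            s[e] = std::max(s[e], W[e][i] + x[i]);
    return s;
}

// Route the input to the single top-scoring expert (top-1 selection).
std::size_t route(const std::vector<std::vector<double>>& W,
                  const std::vector<double>& x) {
    const auto s = tropical_scores(W, x);
    return static_cast<std::size_t>(
        std::max_element(s.begin(), s.end()) - s.begin());
}
```

Because the semiring uses only max and add (no multiply-accumulate), each routing decision costs one comparison and one addition per weight, which is the kind of structure that keeps per-operation energy low.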

Performance

| System | Latency (µs) | Energy (pJ/op) | Speedup | Memory Reduction |
|---|---|---|---|---|
| Array-Based (32 experts) | 45 | 1.2 | 1.0x | Baseline |
| Graph-Native (32 experts) | 28 | 0.7 | 1.6x | 90% |
| Graph-Native (243 experts) | 65 | 0.9 | 2.8x | 95% |

Documentation

See the docs/ directory for detailed documentation.

Building from Source

See docs/guides/building.md for build instructions. Requires C++20 compiler and CMake 3.20+.

License

MIT License - see LICENSE file.

Contact