Home - AmusementClub/vs-mlrt GitHub Wiki

Welcome to the vs-mlrt wiki!

The goal of this project is to provide a highly optimized AI inference runtime for VapourSynth.

Runtimes

  • vs-ov: OpenVINO Pure CPU AI Inference Runtime
  • vs-ort: ONNX Runtime based CPU/CUDA AI Inference Runtime
  • vs-trt: TensorRT based CUDA AI Inference Runtime
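All three runtimes can be driven through the `vsmlrt.py` wrapper shipped with the project, which selects a backend per call. Below is a minimal sketch, assuming VapourSynth, the vs-mlrt plugins, and the `vsmlrt` wrapper are installed; the blank input clip and the Waifu2x model choice are placeholders for illustration only, and this fragment requires a working VapourSynth environment (and, for the CUDA backends, a supported GPU) to actually run.

```python
# Sketch: choosing an inference runtime via the vsmlrt.py wrapper.
# Not runnable as-is -- requires VapourSynth plus the vs-mlrt plugins.
import vapoursynth as vs
from vsmlrt import Waifu2x, Waifu2xModel, Backend

core = vs.core

# Placeholder input; a real script would load a source clip instead.
clip = core.std.BlankClip(width=640, height=360, format=vs.RGBS)

# Pick one backend; each corresponds to a runtime listed above:
backend = Backend.OV_CPU()      # vs-ov:  OpenVINO, pure CPU
# backend = Backend.ORT_CUDA()  # vs-ort: ONNX Runtime, CUDA
# backend = Backend.TRT()       # vs-trt: TensorRT, CUDA

upscaled = Waifu2x(clip, model=Waifu2xModel.upconv_7_anime_style_art_rgb,
                   backend=backend)
upscaled.set_output()
```

Swapping runtimes is then a one-line change to `backend`, with the rest of the filter chain untouched.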

Models

The following models are available:

Device-specific benchmarks