📺 How to Upscale a Video

🏠 Introduction

This guide is a beginner's introduction to video upscaling, and focuses on the easiest way to get started. It assumes you already have a model picked out and a video you would like to upscale. If you need help downloading or picking a model, check out this guide.

Note that while chaiNNer and enhancr both have video upscaling capabilities, we are using JaNai GUI here because it is significantly faster than chaiNNer for video and is free (whereas enhancr is freemium). This makes it the best option for video upscaling if you need a user interface.

📜 Instructions

  1. Install AnimeJaNaiConverterGui, a free and open-source app for upscaling and interpolating videos. Follow the link and download the latest release (either portable or installer).
  2. Launch the app and you will see the window below, which is hopefully self-explanatory-- the numbered steps that follow correspond to the markers in the screenshot:

[Screenshot: AnimeJaNaiConverterGui main window]

  1. Select the video you would like to upscale.
  2. Select where you want the video to be output.
  3. Load the ONNX file. If the model you downloaded is in PTH format, simply follow the PTH to ONNX conversion guide.
  4. Select the backend. For recent NVIDIA GPUs, you want to select TensorRT. For everything else (older NVIDIA GPUs or AMD GPUs), try DirectML first. TensorRT is extremely fast, but limited to fairly new NVIDIA GPUs. I don't know what the exact cutoff is, but the 20-, 30- and 40-series GPUs definitely support TensorRT.
  5. Click Upscale, and the process should begin. Note that for TensorRT, JaNai GUI will build an engine file first. This engine file is GPU-specific (i.e., you can't just build one and share it with a friend), and may take 20-30 minutes to build depending on your specifications and the size of the model. However, this only happens once per model-- once you've built an engine for a model, you won't need to do it again the next time you use it.

👀 TensorRT for AniSD and Other Custom Models (Advanced)

TensorRT (TRT) is an NVIDIA software library designed to optimize AI models on most NVIDIA GPUs. Some models can see up to a 30x speedup over plain old PyTorch inference. As discussed above, JaNai GUI supports TRT inference. This portion of the guide covers advanced usage of JaNai GUI to maximize TRT performance and model compatibility.

Basics of TRT Inference

First, you need a model that can be converted from PTH into ONNX. Most models can be-- see the guide here. However, some, such as those based on the CRAFT architecture, are completely incompatible with ONNX. This in turn means no TRT inference.

Then, the ONNX file will need to be converted into an engine file, which is essentially a version of the model optimized for your specific GPU. Some architectures are compatible with ONNX but not with the engine conversion process-- this likewise means no TRT inference for them.
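
To make this concrete: the conversion is done with NVIDIA's trtexec tool, which ships inside VSMLRT (more on that below). A minimal sketch with a hypothetical model name-- the real, arch-specific commands appear later in this guide:

```
trtexec.exe --fp16 --onnx="some_model_fp32.onnx" --saveEngine=some_model_fp32.engine --skipInference
```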

Now that you know the basics, let's jump into the setup process.

Setup

As of the time of writing, JaNai GUI's engine building isn't completely working-- some architectures that do work with TRT can't be converted into engines through the app. And as discussed above, no engine = no TRT inference. However, there is a workaround: we can bypass JaNai's automated engine building by building the engine manually ourselves. It sounds daunting, but this guide will walk you through it step by step.

  1. First, download JaNai GUI 0.0.6, which can be found here. Specifically, you want AnimeJaNaiConverterGui-0.0.6-Portable.7z, the portable version. Once downloaded, extract it into a directory of your choice. Why are we using 0.0.6? Because engine building was changed in subsequent versions of the app, which eliminates the workaround I mentioned above.

  2. Now, you will want to upgrade VSMLRT. VSMLRT is the backend of JaNai GUI-- the meat and potatoes that actually allows for TRT inference-- but this version of JaNai GUI ships with an older release of it. Grab the newest one from the GitHub releases page; as of the time of writing, this is v14.test4. You want the file named vsmlrt-windows-x64-cuda.[versionnumber].7z-- it's the largest one by filesize, at close to 2GB. Extract the files somewhere.

  3. Now, go back to your JaNai GUI folder, and navigate to JaNai Base Directory\mpv-upscale-2x_animejanai. There, you should see a vsmlrt.py file. Replace that vsmlrt.py file with the version from the VSMLRT zip you downloaded in step 2.
    [Screenshot: 7-Zip showing vsmlrt.py inside the VSMLRT archive]

  4. Then, navigate to JaNai Base Directory\mpv-upscale-2x_animejanai\vapoursynth64\plugins, and drag in the DLL files shown in the screenshot below. This should also replace a few existing files in the plugins folder.
    [Screenshot: 7-Zip showing the DLL files to copy into the plugins folder]

  5. Then, while still in the plugins folder, also drag in the vsmlrt-cuda and vsort folders from the VSMLRT archive, as shown below. This should replace the existing items in the folder. NOTE: It seems like the vsov folder may be needed as well-- just to be safe, drag it in too.
    [Screenshot: 7-Zip showing the vsmlrt-cuda, vsort and vsov folders being copied]

  6. Done! With this, you should be on the newest version of VSMLRT (if you prefer the command line for steps 3-5, see the sketch below). Let's move on to building engines.
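
If you prefer the command line to drag-and-drop, steps 3 through 5 boil down to the file operations sketched below. All paths are assumptions for illustration (a portable JaNai GUI at C:\JaNaiGUI and VSMLRT extracted to C:\vsmlrt), and the exact layout of the VSMLRT archive may vary between releases:

```
REM Hedged sketch of steps 3-5; adjust paths to your own setup
set VSMLRT=C:\vsmlrt
set JANAI=C:\JaNaiGUI\mpv-upscale-2x_animejanai

REM Step 3: replace vsmlrt.py
copy /Y "%VSMLRT%\vsmlrt.py" "%JANAI%\vsmlrt.py"

REM Step 4: copy the plugin DLLs (assumes they sit at the archive root)
copy /Y "%VSMLRT%\*.dll" "%JANAI%\vapoursynth64\plugins"

REM Step 5: copy the vsmlrt-cuda, vsort and vsov folders
xcopy /E /Y /I "%VSMLRT%\vsmlrt-cuda" "%JANAI%\vapoursynth64\plugins\vsmlrt-cuda"
xcopy /E /Y /I "%VSMLRT%\vsort" "%JANAI%\vapoursynth64\plugins\vsort"
xcopy /E /Y /I "%VSMLRT%\vsov" "%JANAI%\vapoursynth64\plugins\vsov"
```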

Engine Building

  1. This section assumes you have an ONNX model on hand-- if not, grab one from the releases page.
  2. Make sure you understand which architecture the model was trained on-- 95% of the time, it will be indicated in the filename. JaNai GUI currently supports engine building for Compact, SPAN and ESRGAN. If you're just looking to use models trained on one of those archs, you can simply go back to the GUI and load the ONNX; the engine conversion should have no problems. However, if you're looking to use models from archs like SwinIR, OmniSR or DAT2, read on!
  3. Navigate to JaNai Base Directory\mpv-upscale-2x_animejanai\vapoursynth64\plugins\vsmlrt-cuda. There, you should see a file called trtexec.exe. Copy the ONNX file into this folder.
  4. Go to the File Explorer address bar and type in cmd. Hit Enter. It should launch Command Prompt in that folder.

[Screenshot: launching cmd from the File Explorer address bar]
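
If the address-bar trick doesn't work for you, you can also open Command Prompt directly and navigate there yourself-- the path below assumes a portable install at C:\JaNaiGUI:

```
cd /d "C:\JaNaiGUI\mpv-upscale-2x_animejanai\vapoursynth64\plugins\vsmlrt-cuda"
```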

  5. Now, you'll want to run the command to convert the ONNX to an engine. Here are a few sample commands:

SwinIR

SwinIR only works with static-axes ONNX, such as the ones provided with AniSD. An FP16 engine provides optimal speeds-- see below for an example command.

```
trtexec.exe --fp16 --onnx="2x_AniSD_AC_G6i2b_SwinIR_117500_256x320_fp32.onnx" --saveEngine=2x_AniSD_AC_G6i2b_SwinIR_117500_256x320_fp32.engine --inputIOFormats=fp32:chw --outputIOFormats=fp32:chw --tacticSources=+CUDNN --skipInference
```

Note that the tile size used by the ONNX file cannot exceed the dimensions of the video. In the example above, the tile size is 320x256-- much smaller than most video files. A good rule of thumb is that the ONNX file's tile size should be about half the video's resolution for optimal speeds-- e.g., a 320x256 tile for a 640x480 video (at least when I tested on my system).

This command should also work for OmniSR, but you may need to drop the --fp16 flag-- see the example below.
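
As a hedged illustration of that tweak (the filename here is hypothetical-- substitute your own OmniSR model), the same command without --fp16 would look like:

```
trtexec.exe --onnx="2x_Example_OmniSR_256x320_fp32.onnx" --saveEngine=2x_Example_OmniSR_256x320_fp32.engine --inputIOFormats=fp32:chw --outputIOFormats=fp32:chw --tacticSources=+CUDNN --skipInference
```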

DAT2

DAT2 doesn't require a static ONNX, but does require a static engine. You can use a command like this:

```
trtexec.exe --bf16 --onnx="2x_AniSD_DC_DAT2_97500_fp32FO.onnx" --optShapes=input:1x3x480x640 --saveEngine=2x_AniSD_DC_DAT2_97500_fp32FO.engine --inputIOFormats=fp32:chw --outputIOFormats=fp32:chw --tacticSources=+CUDNN --skipInference
```

Note the use of --bf16: an FP16 engine does not work with DAT2. You can customize --optShapes to your target resolution-- in the example above, it's targeted at 640x480 (the shape is NxCxHxW, so 640x480 becomes 1x3x480x640). See the example below.
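
For instance, reusing the same AniSD model but targeting a 960x720 source instead (an assumption purely for illustration), only the --optShapes value changes:

```
trtexec.exe --bf16 --onnx="2x_AniSD_DC_DAT2_97500_fp32FO.onnx" --optShapes=input:1x3x720x960 --saveEngine=2x_AniSD_DC_DAT2_97500_fp32FO.engine --inputIOFormats=fp32:chw --outputIOFormats=fp32:chw --tacticSources=+CUDNN --skipInference
```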

RealPLKSR

RealPLKSR works with dynamic engines and dynamic ONNX. Make sure to build an FP16 engine; otherwise it'll be slower than DML inference (at which point, you might as well just use DML).

```
trtexec.exe --fp16 --onnx="2x_AniSD_AC_RealPLKSR_127500_fp32_FO_dynamic_FP16e.onnx" --optShapes=input:1x3x480x640 --saveEngine=2x_AniSD_AC_RealPLKSR_127500_fp32_FO_dynamic_FP16e.engine --inputIOFormats=fp32:chw --outputIOFormats=fp32:chw --tacticSources=+CUDNN --skipInference
```

> [!TIP]
> RealPLKSR also supports DML inference, which isn't as fast as TRT inference but is still a reasonably fast option. See below for further information on DML inference.


  6. When you run the command, you should see a bunch of text spew out. Don't panic if it prints a bunch of errors-- all that matters is that you get a .engine file at the end. Be patient-- the engine building process can take up to 30 minutes.
  7. When the engine build completes, you should see a .engine file next to your ONNX file. Drag both files into JaNai Base Directory\mpv-upscale-2x_animejanai\vapoursynth64\plugins\models\animejanai, so that the ONNX and engine files are both in that folder (if you prefer the command line, see the sketch after this list).
  8. Open JaNai and navigate to ONNX Model Path. Click Select File, as shown in the first section of this guide. You should see the ONNX file you dragged in earlier; select it.
  9. Load your videos and tweak whatever other settings you need to change. Then hit Upscale. It should skip the engine building process, load the engine you dragged in, and start inference!
  10. Enjoy your boosted inference speed!
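
For completeness, here is what step 7 might look like as commands-- a hedged sketch where the install path and model filenames are examples only:

```
REM Example paths and filenames -- adjust to your install and model
move "2x_MyModel.onnx" "C:\JaNaiGUI\mpv-upscale-2x_animejanai\vapoursynth64\plugins\models\animejanai"
move "2x_MyModel.engine" "C:\JaNaiGUI\mpv-upscale-2x_animejanai\vapoursynth64\plugins\models\animejanai"
```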

👀 DirectML for AniSD and Other Custom Models (Easy)

DirectML (DML) is an alternative inference backend, like NCNN and PyTorch. To use DML, simply select DirectML under the Upscaling Backend options in JaNai GUI (marked as #4 in the screenshot at the top of this guide) and load an ONNX as you normally would.

While TensorRT is the fastest inference backend for many archs, DML remains a good option for those without NVIDIA GPUs. It's worth noting that RealPLKSR is also surprisingly quick with DML.

DML can take dynamic ONNX, which makes it more accessible for some archs (think any ONNX converted in chaiNNer). However, DML does not appear to work with SwinIR and DAT.