[OLD] Picking Between chaiNNer and enhancr

enhancr or chaiNNer?

Both enhancr and chaiNNer support video inference. However, chaiNNer is more of a multi-purpose image and video utility than a dedicated video upscaling application, while enhancr is dedicated to video upscaling, interpolation and restoration. As a result, enhancr is substantially faster than chaiNNer, even in its free tier. The paid tier, unlocked at $7.50 per month on Patreon, supports TensorRT inference. I won't bore you with the technical mumbo jumbo behind TensorRT; it is effectively just a massive speed boost (think 5x faster or more) for NVIDIA GPUs. So if you have a lot of episodes of The Simpsons to get through, have an NVIDIA GPU, and don't mind shelling out some cash, then enhancr is hands-down the better solution. This guide won't cover setting up enhancr, as thorough documentation is available here.
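
For the curious, here is roughly what that speedup involves under the hood. The sketch below shows the generic PyTorch -> ONNX -> TensorRT conversion path that TensorRT-based upscalers build on; it is not enhancr's actual code, and the tiny stand-in model, file names and shapes are all placeholders.

```python
# Minimal sketch of the PyTorch -> ONNX -> TensorRT pipeline (illustrative only).
import torch
import torch.nn as nn

# Stand-in for a small super-resolution network (think Compact-sized); hypothetical.
class TinySR(nn.Module):
    def __init__(self, scale: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3 * scale * scale, 3, padding=1),
        )
        self.upsample = nn.PixelShuffle(scale)

    def forward(self, x):
        return self.upsample(self.body(x))

model = TinySR().eval()
dummy = torch.randn(1, 3, 270, 480)  # one low-res frame (N, C, H, W)

# Step 1: export the model to ONNX, the interchange format TensorRT consumes.
torch.onnx.export(
    model, dummy, "tiny_sr.onnx",
    input_names=["lr"], output_names=["hr"],
    dynamic_axes={"lr": {0: "batch"}, "hr": {0: "batch"}},
)

# Step 2 (outside Python): compile the ONNX graph into a GPU-specific engine, e.g.
#   trtexec --onnx=tiny_sr.onnx --saveEngine=tiny_sr.engine --fp16
```

Tools like enhancr handle this conversion for you; the point is simply that the resulting engine is compiled specifically for your NVIDIA GPU, which is where the speed comes from.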

That being said, despite being slower, chaiNNer is still a great option because of its broad utility. It also supports a wider array of model architectures than enhancr. If you want to upscale individual images, manga/comics, Stable Diffusion or Midjourney output, and so on, chaiNNer is the way to go.

Comparison Table

| | chaiNNer | enhancr (free) | enhancr (paid) |
|---|---|---|---|
| Price | Free | Free | $7.50/month |
| Upscale Videos | ✅ | ✅ | ✅ |
| Upscale Images/Image Sequences | ✅ | ❌ | ❌ |
| Video Interpolation | ❌ | ✅ | ✅ |
| Video Inference Speed | 🔥 (via PyTorch) | 🔥🔥🔥 (via DirectML or NCNN) | 🔥🔥🔥🔥🔥 (via TensorRT) |
| Inference Frameworks Supported | PyTorch, ONNX, NCNN | ONNX, NCNN, DirectML | PyTorch, NCNN, DirectML, TensorRT |
| Notable Architectures NOT Supported* | DITN, CUGAN/Shuffle CUGAN** | DITN, DAT, OmniSR, SRFormer, SwinIR (partial support) | DITN, DAT, OmniSR, SRFormer, SwinIR (partial support) |
| Output Containers (see the encoding sketch below) | mkv, mp4, mov, avi, image sequence | mkv, mp4, webm, mov, image sequence | mkv, mp4, webm, mov, image sequence |
| Codecs Supported for Export | H.264, H.265, VP9, FFV1 | H.264, H.265, VP9, FFV1, AV1, ProRes | H.264, H.265, VP9, FFV1, AV1, ProRes |
| UI Style | Node-based | Tab-based (should be familiar to most people) | Tab-based (should be familiar to most people) |

* It should be noted that the vast majority of public models are trained on ESRGAN and Compact anyway. The architectures not supported by enhancr are newer, more novel ones that aren't widely trained on at this time.

** chaiNNer supports these architectures via ONNX Runtime, though ONNX inference is typically slower.
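
Since both tools can write an image sequence (the Output Containers row above), one common workflow is to upscale to PNG frames and encode the final video yourself afterwards. Below is a minimal sketch of that last step, shelling out to ffmpeg from Python; the frame pattern, frame rate and CRF value are assumptions you would adjust to your source.

```python
import subprocess

# Hypothetical example: encode an exported PNG sequence to H.265 inside an MKV container.
subprocess.run(
    [
        "ffmpeg",
        "-framerate", "23.976",     # frame rate of the original video
        "-i", "upscaled_%06d.png",  # upscaled frames written by chaiNNer/enhancr
        "-c:v", "libx265",          # H.265; swap for libx264, libvpx-vp9, ffv1, etc.
        "-crf", "18",               # quality target (lower = better quality, bigger file)
        "-pix_fmt", "yuv420p",      # broadly compatible pixel format
        "upscaled.mkv",
    ],
    check=True,
)
```

Note that this produces a video-only file; you would still need to mux in the audio from the original source.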