FEC Settings

Forward error correction can be turned on at the sender; the receiver recognizes it automatically. All schemes below can be used for video, while the interleaved multiplied stream (and also Reed-Solomon) can be used for audio.

For video, prefer Reed-Solomon unless you have a high-bitrate stream for which Reed-Solomon is too slow; in that case use LDGM, which is faster.

For audio, the interleaved multiplied stream is recommended (although Reed-Solomon is also available).

General

The general syntax of FEC is the following:

uv -f [A:|V:]<fec>

where:

  • the optional A: or V: prefix specifies whether the FEC applies to audio or to video. If no prefix is given, video is assumed (same as V:).
  • <fec> - one of the FEC schemes introduced below. For audio, the recommended scheme is mult (interleaved multiplied stream); Reed-Solomon can be used as well.
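
For example, according to the prefix rule above, the following two commands request the same video Reed-Solomon protection:

uv -f rs
uv -f V:rs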

The following rules are recommended for FEC selection:

  1. for audio, if FEC is needed, prefer the interleaved multiplied stream (R-S is supported as well)
  2. for video up to the order of 1 Gbps use Reed-Solomon
  3. if Reed-Solomon is not fast enough, use LDGM
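
Putting these rules together, a sender protecting H.264 video with Reed-Solomon and audio with a doubled stream might be invoked as follows (a sketch only: the capture device and receiver address are placeholders, -s embedded assumes audio embedded in the video capture, and passing -f twice with the A:/V: prefixes is our reading of the syntax above):

uv -t <capture> -s embedded -c libavcodec:codec=H.264 -f V:rs -f A:mult:2 <receiver>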

Interleaved multiplied stream

Usually suitable for audio only.

It is turned on by:

uv ... -f A:mult:3

where 3 is the multiplying factor, so the audio stream is sent three times (roughly tripling the audio bandwidth).
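
A complete sender invocation might then look as follows (a sketch assuming a DeckLink card with embedded audio; the receiver needs no extra option, since FEC is recognized automatically):

uv -t decklink -s embedded -f A:mult:3 <receiver>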

Reed–Solomon

For streams with a bitrate up to hundreds of Mbps it is best to use Reed-Solomon, which is an optimal erasure code (e.g. with 25% added redundancy it is able to recover from a 20% loss per frame, since the parity then makes up 20% of the transmitted data).

Usage:

uv -f rs -t <capture> -c libavcodec:codec=H.264

uses Reed-Solomon with default parameters. You can also specify the parameters of R-S directly with the following syntax:

uv -f rs:k:n

  • k is the count of source symbols
  • n is the count of generated symbols (source + parity; must be > k)
  • k/n gives the code rate of the scheme (the higher the ratio between n and k, the more redundancy is used)
  • in other words, (n-k)/k gives the added redundancy

A k of around 200 is recommended, because neither k nor n should exceed 255.

Therefore, the following command causes the use of 200 source symbols plus 50 redundant symbols per frame (25% added redundancy; since the parity makes up 50/250 = 20% of the transmitted symbols, a loss of up to 20% per frame can be recovered):

uv -f rs:200:250 -t <capture> -c libavcodec:codec=H.264
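
Since Reed-Solomon is also available for audio, the same parametrization can presumably be requested with the A: prefix (a sketch; the audio capture and the parameters are only illustrative):

uv -t <capture> -s embedded -f A:rs:200:250 <receiver>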

Note: As with LDGM, the redundancy is applied on a per-frame basis. This means that if you encounter a burst loss (more packets lost in a single frame than the redundancy can cover), the frame cannot be reconstructed. The packet loss reported by UG should therefore be taken only as informative, because it reports average values over 5-second intervals.

LDGM

LDGM stands for low-density generator matrix.

Unless Reed-Solomon is unusable (typically because it is too slow for the bitrate), LDGM is the less preferred option, for the following reasons:

  1. it is not an optimal erasure code – even if more data than the amount of the original frame data is received, it is not guaranteed that the frame will be recovered; this holds especially for low bitrates

  2. as follows from the previous point, it is unsuitable for low-bitrate streams

  3. the setting is more complex – in contrast to R-S, not only the ratio between k and m matters here

There are several ways to control the properties of LDGM:

  • If you are familiar with the LDGM scheme, you can set its properties directly. The syntax is:

    uv ... -f LDGM:<k>:<m>:<c>

    Where <k> is the parity matrix width, <m> the matrix height and <c> the number of ones per column.

    Basically, k specifies the matrix width and m the number of redundant lines (m/k specifies the redundancy; e.g. for the default k=256, m=192 it is 75% redundancy, i.e. a code rate of 1/1.75 ≈ 57%). c shall be a small value, usually around 5 will be OK, while a good value for k is in the order of hundreds or a few thousands (say up to 2000). Example invocations of both parameterized forms are sketched after this list.

  • You can also use the following syntax:

    uv ... -f LDGM:<p>%

    In that case UltraGrid tries to cover losses of up to <p> percent per frame. Please note that this does not guarantee that such losses will always be covered – with burst losses, the momentary loss can be significantly higher than the value reported by UG over a 5 s time frame.

    There are only a few presets, intended for uncompressed or JPEG FullHD formats, from which the parameters are chosen. If the stream is e.g. H.264, it won't give good results (and for H.264/H.265 it is better to use a Reed-Solomon scheme anyway).

  • The last possibility is not to specify anything:

    uv ... -f LDGM

    In that case, static predefined values with 1/3 redundancy are used. Note that this is useful only in some cases.
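
Example invocations of the two parameterized forms (a sketch; the values are only illustrative, chosen according to the guidance above):

uv -t <capture> -c JPEG -f LDGM:2000:1500:5 <receiver> # k=2000, m=1500 (75% redundancy), c=5
uv -t <capture> -c JPEG -f LDGM:20% <receiver> # tries to cover up to 20% loss per frame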

Selecting encoding device

By default, the CPU is used to compute LDGM parity. This is usually sufficient for lower bitrates (up to uncompressed HD). However, its performance falls behind at higher bitrates; in that case, the CUDA implementation of LDGM should be used:

uv -f LDGM --param ldgm-device=GPU -t <capture> <receiver> # sets encoding of LDGM on GPU

In a similar way, you can set LDGM decoding on the GPU:

uv -d gl --param ldgm-device=GPU <sender>

If you want to state explicitly that encoding/decoding should be performed on the CPU (the default), you can use the option:

uv --param ldgm-device=CPU

Note: The parameter can be set independently on the sender and on the receiver.
