Recording and Playback

Usage

You can export video sent from UltraGrid by passing the '--record' option:

uv -t deltacast -c libavcodec:codec=H.264 --record
# Optionally you can specify also output directory:
uv -t deltacast -c libavcodec:codec=H.264 --record=lecture-20230424
# show options:
uv --record=help

Such a recording can later be sent to a remote location:

uv --playback lecture-20230424 <remote_host>

Please note that the video compression specification also matters when recording, because the recorded stream is stored compressed (unlike audio, which is recorded as PCM WAVE). This also saves storage, since an uncompressed video recording is usually undesirable.

As a consequence, playback uses the compression that was specified when recording and it cannot currently be changed.
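
For illustration, the record directory then contains the compressed frames together with the recorded audio and a metadata file (the exact names below are an assumption, shown for an H.264 recording; the individual files are discussed later on this page):

 lecture-20230424/
   00000001.h264
   00000002.h264
   ...
   sound.wav     # recorded audio (PCM WAVE)
   video.info    # metadata describing the frame sequence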

Converting exported video

You can also process the saved frames, depending on their compression.

YUV

You can process uncompressed YUV frames like this:

ls | grep '\.yuv$' | sed 's/\.yuv$//' | parallel mv {}.yuv {}.Y  # files need to have .Y extension 

ffmpeg -pix_fmt uyvy422 -s 1920x1080 -i %08d.Y -codec:v huffyuv out.avi

(instead of parallel you can also use "for n in *yuv; do mv $n ${n/yuv/Y}; done"; however, you are then limited by the maximal length of command-line arguments, which caps the number of files that can be processed at once)
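
If the number of frames exceeds that limit, a find-based variant (a sketch using standard GNU tools) renames the files one by one without building a single huge argument list:

# rename every .yuv frame to .Y, invoking mv once per file
find . -maxdepth 1 -name '*.yuv' -exec sh -c 'mv "$1" "${1%.yuv}.Y"' _ {} \;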

H.264/HEVC

H.264 or HEVC pictures can be simply concatenated by running:

cat *.h264 > out.h264

or, if there are more files than fit into the maximal argument list size, you can use:

:>| out.h264

ls [0123456789]*.h264 | xargs -n 1 sh -c 'cat "$0" >> out.h264'

Then you may convert the video to a more convenient file format, e.g.:

ffmpeg -r 30000/1001 -i out.h264 -i sound.wav -codec:v copy -codec:a aac -strict -2 -r 30000/1001 output.mp4

In this example:

  • we explicitly provide the input frame rate (29.97); otherwise, ffmpeg would guess it from the file, and since there are no container headers, only frame timestamps, the guess might be incorrect
  • we add the sound recorded by UltraGrid; it is re-encoded from PCM to AAC, and the option "-strict -2" is there because AAC support in FFmpeg used to be considered experimental (recent FFmpeg versions no longer require it)
  • the output frame rate is given just to be safe; it can be omitted (a quick check of the result is shown below)
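
To sanity-check the resulting container (frame rate, resolution, presence of the audio track), you can inspect it, for example with ffprobe:

ffprobe -hide_banner output.mp4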

JPEG

And for JPEG, you may use:

mencoder "mf://*.jpg" -mf fps=25 -o output.avi -ovc copy

or (transcoding to H.264, single pass)

mencoder mf://*jpg -audiofile sound.wav -oac copy -ovc x264 -o out.avi -of lavf  -x264encopts bitrate=10000:tff

or (transcoding to H.264, two pass, very high quality)

mencoder mf://*jpg -ovc x264 -x264encopts pass=1:preset=veryslow:bitrate=20000:tff -o /dev/null

mencoder mf://*jpg -audiofile sound.wav -oac copy -ovc x264 -x264encopts pass=2:preset=veryslow:bitrate=20000:tff -o out.avi
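
If mencoder is not available, a roughly equivalent single-pass FFmpeg command (a sketch only; adjust the frame rate and bitrate to your material) could be:

ffmpeg -framerate 25 -pattern_type glob -i '*.jpg' -i sound.wav -codec:v libx264 -preset veryslow -b:v 10M -codec:a aac -shortest out.mp4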

Converting foreign media for UltraGrid

RGB

Note: uncompressed RGB video consumes a significant amount of storage and you will need high-bandwidth storage (i.e. an SSD) to play it back. If that is not available, consider storing the recording as JPEG.

Convert the video to a sequence of frames:

mplayer -vo pnm <video>

Drop PNM headers:

ls | grep '\.ppm$' | sed 's/\.ppm$//' | parallel -q bash -c 'tail -n +4 {}.ppm > {}.rgb'

Finally, create video.info (in the same folder) with the following content (adjust the size, fps and count):

 version 1
 width 320
 height 240
 fourcc RGB2
 fps 24.00
 interlacing 0
 count 1000
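
Since the count has to match the number of frames, you may prefer to generate the file with a small script (purely a sketch: width, height and fps are placeholders you have to adjust to your material; the .rgb extension follows the conversion step above):

# write video.info, counting the .rgb frames produced above
count=$(ls | grep -c '\.rgb$')
cat > video.info <<EOF
version 1
width 320
height 240
fourcc RGB2
fps 24.00
interlacing 0
count $count
EOF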

MJPEG

You can also create JPEG files instead of RGB to reduce the required disk space. The process is even easier than in the previous case - just convert the video into a sequence of images:

mplayer -vo jpeg <video>
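
If you prefer FFmpeg over mplayer, a roughly equivalent command (a sketch; the eight-digit numbering is an assumption, so check that the resulting names match what --playback expects) is:

ffmpeg -i <video> -q:v 2 %08d.jpg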

And, subsequently, add the corresponding metadata file (called video.info, in the same directory):

 version 1
 width 320
 height 240
 fourcc MJPG
 fps 24.00
 interlacing 0
 count 1000

Now you should be able to play the image sequence by executing:

 uv --playback <directory_with_JPEG_images>

Converting audio for UltraGrid

Recorded audio is stored in the same directory alongside the video files. It needs to be in uncompressed PCM format (signed, little-endian). Foreign audio can be converted to this format quite easily:

 sox <source> -e signed -b 16 sound.wav

or you can use FFmpeg (e.g. if the source is a multimedia container):

ffmpeg -i <source> -acodec pcm_s16le -ar 48000 -ac 2 sound.wav
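
To verify that the resulting file really is signed 16-bit PCM with the expected sample rate, you can inspect it, for instance with ffprobe (or soxi from the sox package):

ffprobe -hide_banner sound.wav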

Direct playback with V4L2 and ALSA (Linux only)

You can also play back a multimedia file directly through a v4l2loopback device. Just load the module (you may need to compile it for your kernel first):

modprobe v4l2loopback
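
The device index is assigned automatically; if you want a fixed index (2 is assumed below), the upstream v4l2loopback module accepts parameters such as video_nr (check modinfo v4l2loopback for your build):

modprobe v4l2loopback video_nr=2 card_label=UltraGrid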

A new device /dev/videoX then appears (in the following we assume that the device index is 2). The device can be written to, e.g., with:

ffmpeg -re -i video.mp4 -f v4l2 /dev/video2

For more advanced setups please refer here and here.

Then you should be able to grab from that V4L2 device in UltraGrid as usual:

uv -t v4l2:device=/dev/video2 <receiver>
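
If you are not sure which /dev/video* node is the loopback, v4l2-ctl (from the v4l-utils package) can list the available devices:

v4l2-ctl --list-devices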

Audio

Similarly, you can also add audio using an ALSA loopback device:

modprobe snd_aloop

Then you can play the media through the combined audio and video loopback:

ffmpeg -re -i media.mp4 -f v4l2 /dev/video2 -f alsa hw:CARD=Loopback,DEV=0

and the UltraGrid capture command will be:

uv -t v4l2:device=/dev/video2 -s alsa:hw:CARD=Loopback,DEV=1 <receiver>

Note the different ALSA device indices in the ffmpeg and uv commands (DEV=0 and DEV=1): audio written to one device of the loopback card becomes available for capture on the other.

Of course, the audio can also be played without the video.
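
To see the playback and capture sides of the loopback card on your system, the standard ALSA tools can list them:

aplay -l    # playback devices - the side ffmpeg writes to
arecord -l  # capture devices - the side UltraGrid reads from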
