Creating video from PAL decodes - happycube/ld-decode GitHub Wiki

Overview

The following is a series of notes about how to use FFmpeg to convert the output from ld-decode (and ld-chroma-decoder) into usable video that can be watched using any compatible video player such as VLC or MPC.

Information is compiled from a number of sources; however the project would like to thank Stephen Neal for his valuable advice on this subject.

PAL video formats

The following text uses the EBU standard descriptions for video formats wherever possible. So PAL SD interlaced video is written as 576i25, whereas in earlier descriptions it was typically written as 576/50i.

In the EBU notation the scanning letter (i or p) comes before the rate, and the number that follows it is always frames per second. In the older notation the number comes before the letter, and it counts fields (for interlaced video) or frames (for progressive video).

So, for example, a PAL SD progressive film source is 576p25 carried 2:2 in a 576i25 frame; in the old format that would have been described as 576/25p in a 576/50i signal.

PAL SD deinterlaced to progressive is 576p50 (which isn't a broadcast standard outside of Australia...). However, 576p50 is a lot easier for computers to handle than 576i25, as it removes the need for them to deinterlace - something which many PC-based players can't do, or don't do well.

Export Tool

There is now the TBC-Video-Export Python script, which ships with pre-made FFmpeg profiles and makes it easy to take CVBS and Y/C TBC files, chroma-decode them, and encode the output to video files on Linux, Windows and macOS.
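As an illustration, a basic invocation looks something like the following (check the tool's own documentation for the current options, as the exact command-line interface may differ):

tbc-video-export input.tbc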

Conversion of PAL video using FFmpeg

To convert the output of ld-chroma-decoder to playable video, you need to do two or three things in FFmpeg.

  1. Tell FFmpeg about the input format (only if not using -p y4m)
  2. Tell FFmpeg if/how you want to process the video (deinterlace, 3:2 removal etc.)
  3. Tell FFmpeg the required output format

These three subjects are covered in more detail below.
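Putting the three parts together, a generic command has this shape (the angle-bracket parts are placeholders, not literal FFmpeg options):

ffmpeg <input format options> -i <input file> -vf "<processing filters>" <output format options> <output file>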

Tell FFmpeg about the input format

ffmpeg -f rawvideo -r 25 -pix_fmt rgb48 -s 928x576 -i "input.rgb"

This takes a PAL 'input.rgb' source and tells FFmpeg that the source is raw video at 25 frames per second, in RGB48 format and at 928x576 resolution.

ffmpeg -i "input.y4m" This will take a PAL 'input.y4m' source and automatically determine the framerate, pixel format, and resolution.

Tell FFmpeg if/how you want to process the video

-filter:v "w3fdif=complex:all" will do a BBC R&D Weston 3-field deinterlacing W3FDIF.

Deinterlacing is the process of taking 576i25 (i.e. 50 x 288-line fields per second, sampled 1/50th of a second apart) and creating 576p50 (i.e. 50 x 576-line frames per second). complex forces the filter to use the more complex coefficients, and all forces it to deinterlace all frames rather than just those flagged as interlaced (which the raw video won't be). This is only required if you want to avoid leaving the deinterlacing to your display or player.

There is also the newer hybrid deinterlacing filter bwdif (Bob Weaver Deinterlacing Filter), which combines elements of yadif and w3fdif: -vf bwdif=1:0:0 (mode 1 = one output frame per field, parity 0 = top field first, deint 0 = deinterlace all frames).

There are also more modern deinterlacers such as QTGMC, an AviSynth/VapourSynth script that is not available in FFmpeg but can be used with ease in tools like StaxRip and Hybrid.

If you want to scale to a standard resolution you can do this with -vf "scale=768:576". Alternatively you can pad and scale to the standard 720x576 SD video resolution (the exact options are TBC, as 4:3 analogue video should sit in the central 702x576 area within a 720x576 frame); a quick and dirty approach is to ignore the 9 samples of blanking on each side and scale straight to 720x576 using -vf "scale=720:576". A possible filter chain for the 702x576-centred approach is sketched below.
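One possible way to follow the central-702x576 convention (a sketch only, not verified against the exact blanking widths in the TBC output) is to scale the picture to 702x576 and then pad it out to 720x576 with 9 black samples on each side:

-vf "scale=702:576,pad=720:576:9:0"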

If you keep things interlaced then you may need a -vf "scale=interl=1" in the filter chain to ensure interlace-aware 4:2:0 chroma scaling.
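For example, an interlace-aware scale to the 'quick and dirty' 720x576 mentioned above could be written as:

-vf "scale=720:576:interl=1"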

Tell FFmpeg the required output format

-pix_fmt yuv420p -vcodec libx264 -crf 0 -aspect 768:576 'colourbars.mp4'

H.264/AVC using the x264 encoder is a good solution for playback on consumer devices (it's the Blu-ray format and is used for HDTV in most of Europe). For compatibility with consumer video devices you need to go from 4:4:4 16-bit to 4:2:0 8-bit, which -pix_fmt yuv420p will do.

Note that for UHD Blu-ray players and other modern devices, 4:2:0 10-bit encoding can be used with -pix_fmt yuv420p10.
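For example, a 10-bit deinterlaced encode might look like this (a sketch only; it assumes your FFmpeg's libx264 was built with 10-bit support, and 10-bit H.264 is not playable on all consumer devices):

ffmpeg -i "input.y4m" -vf "bwdif=1:0:0" -c:v libx264 -pix_fmt yuv420p10 -crf 18 -aspect 768:576 output.576p50.10bit.mp4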

-c:v libx264 selects the x264 encoder. -crf is the constant rate factor: 0 is mathematically lossless, 18 is deemed near-transparent and visually close to lossless, and the default is 23. As a rough rule of thumb, decreasing the CRF by 6 approximately doubles the file size and increasing it by 6 approximately halves it.

Note that -hwaccel only accelerates decoding; for graphics card accelerated encoding, replace libx264 with a hardware encoder such as h264_nvenc (Nvidia), h264_amf (AMD) or h264_vaapi (Linux/VAAPI). The -hwaccel option (e.g. -hwaccel cuda for Nvidia) goes at the start of your command, just after ffmpeg.
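As an illustration, a hardware-encoded deinterlaced export on an Nvidia card might look like this (a sketch only; it assumes an FFmpeg build with NVENC support, and hardware encoders use bitrate/quality controls that differ from libx264's -crf):

ffmpeg -i "input.y4m" -vf "bwdif=1:0:0" -c:v h264_nvenc -preset slow -b:v 15M -pix_fmt yuv420p -aspect 768:576 output.576p50.nvenc.mp4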

-aspect 768:576 flags the display aspect ratio as square-pixel 'PAL' but leaves the video as 928x576 within the codec; it's then up to the player to handle the scaling. I keep the vertical resolution the same to avoid a vertical scale. Effectively the two different figures let the player calculate the pixel aspect ratio, which is non-square for most SD video formats (4fSC and Rec.601 4:3 or 16:9).
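For example, with a 928x576 picture flagged as -aspect 768:576 (i.e. 4:3), the player derives a pixel aspect ratio of (768/576) / (928/576) = 768/928 ≈ 0.83, so each stored sample is displayed slightly narrower than square.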

If you haven't deinterlaced to 50p with a deinterlacer and want native interlaced output:

-pix_fmt yuv420p -vcodec libx264 -crf 0 -flags +ildct+ilme -aspect 768:576 'colourbars.576i.mp4'

The -flags +ildct+ilme options force x264 to encode natively interlaced (using interlaced DCT and motion estimation) rather than progressive.

Complete FFmpeg examples

Edit input.tbc and output.xxx in each command to match your own input and output file names.

Encode without deinterlacing

To encode MP4 video without deinterlacing (note that the -top 1 switch tells FFmpeg that the video is top field first; this hint is required for players to correctly de-interlace the video during playback. If the original LaserDisc capture is bottom field first, this switch should be removed):

ffmpeg -i "input.y4m" -pix_fmt yuv420p -top 1 -vcodec libx264 -crf 18 -flags +ildct+ilme -aspect 768:576 "output.576i25.mp4"

To add PCM analogue audio to the encoding, add the following parameters immediately after ffmpeg (i.e. before the video input) in the command above:

-f s16le -ar 44.1k -ac 2 -i input.pcm

To chroma decode the .tbc file and combine analogue sound (pcm) with the video, use the following command line:

ld-chroma-decoder --decoder transform3d input.tbc -p y4m | ffmpeg -f s16le -ar 44.1k -ac 2 -i input.pcm -i - -pix_fmt yuv420p -vcodec libx264 -crf 18 -flags +ildct+ilme -aspect 768:576 output.576i25.mp4

To export to a codec more practical for editing or post-production, you can encode to ProRes HQ.

ld-chroma-decoder --decoder transform3d -p y4m -q input.tbc | ffmpeg -i - -c:v prores -profile:v 3 -vendor apl0 -bits_per_mb 8000 -quant_mat hq -f mov -top 1 -flags +ilme+ildct -pix_fmt yuv422p10 -color_primaries bt470bg -color_trc bt709 -colorspace bt470bg -color_range tv -vf setdar=4/3,setfield=tff OUTPUT.mov

Encode with deinterlacing

ffmpeg -i "input.y4m" -vf "w3fdif=complex:all" -pix_fmt yuv420p -vcodec libx264 -crf 18 -aspect 768:576 "output.576p50.mp4"

Decode the .tbc file, deinterlace with w3fdif, and combine analogue sound (pcm) with the video:

ld-chroma-decoder -p y4m --decoder transform3d input.tbc | ffmpeg -f s16le -ar 44.1k -ac 2 -i input.pcm -i - -filter:v "w3fdif=complex:all" -pix_fmt yuv420p -c:v libx264 -crf 18 -aspect 768:576 "output.576p50.mp4"

Decode the .tbc file, deinterlace with bwdif, and combine analogue sound (pcm) with the video:

ld-chroma-decoder -p y4m --decoder transform3d input.tbc | ffmpeg -f s16le -ar 44.1k -ac 2 -i input.pcm -i - -filter:v "bwdif=1" -pix_fmt yuv420p -c:v libx264 -crf 16 -aspect 768:576 output.576p50.mp4

Online Usage

For upload to YouTube (deinterlaced files only) it is recommended to scale the video to a 4:3 resolution between 1536x1152 and 5760x4320. This prevents YouTube's own re-encoding from badly degrading the image quality, as it does to files below 1440p in resolution and bitrate. This creates large files, but the resulting playback quality after upload is better than with more lossy options.

ld-chroma-decoder --decoder transform3d -p y4m -q input.tbc | ffmpeg -f s16le -ar 44.1k -ac 2 -i input.pcm -i - -c:v prores -profile:v 3 -vendor apl0 -bits_per_mb 8000 -quant_mat hq -f mov -top 1 -pix_fmt yuv422p10 -color_primaries bt470bg -color_trc bt709 -colorspace bt470bg -color_range tv -vf "bwdif=1:0:0,scale=1536:1152:flags=lanczos" -aspect 768:576 output_ProRes_HQ_YT.mov

The .mov container is used for compliance, as the codec is ProRes HQ, which is natively supported by YouTube. However, if you do not need to edit the file or are storing it long term, you are better off using the .mkv container, which is harder to damage and easier to stream/upload.
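If you already have a ProRes .mov, it can be remuxed into Matroska without re-encoding, for example:

ffmpeg -i output_ProRes_HQ_YT.mov -c copy output_ProRes_HQ_YT.mkv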

For platforms that don't re-encode uploaded files, such as Odysee, you can use a lower-bitrate file produced with this encoding command:

ld-chroma-decoder --decoder transform3d -p y4m -q input.tbc | ffmpeg -f s16le -ar 44.1k -ac 2 -i input.pcm -i - -c:v libx264 -bufsize 16000k -crf 20 -maxrate 8000k -movflags +faststart -pix_fmt yuv420p -color_primaries bt470bg -color_trc bt709 -colorspace bt470bg -color_range tv -vf bwdif=1:0:0 -aspect 768:576 output_web.mov

Bilingual or dual-mono sound

Some LaserDiscs provide bilingual/dual-mono sound, where the stereo audio carries two independent mono tracks. To split them into separate files with FFmpeg, use a command similar to the following:

ffmpeg -i stereo.wav -map_channel 0.0.0 left.wav -map_channel 0.0.1 right.wav
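If instead you want to keep just one of the two languages in the finished video rather than splitting to separate files, one approach (an illustrative sketch) is the pan audio filter, which here copies the left channel to both output channels; swap c0 for c1 to keep the right channel instead:

ffmpeg -i stereo.wav -af "pan=stereo|c0=c0|c1=c0" left_only.wav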