VirtualDub Plugin SDK 1.2
Creating time-varying filters
Many video filters perform the same operation on a video frame regardless of where it lies in the video stream, but some filters change operation over time depending on the frame number, notably filters that overlay additional graphics or text on top of the video based on an external script.
While it is possible for a filter to count frames on its own in `runProc`, this is unreliable. A better way is to use the `FilterStateInfo` structure, pointed to by the `fa->pfsi` pointer, which indicates the timing for the current frame.
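As a rough sketch of the difference (the entry-point and structure names are written as this page uses them; depending on which SDK header you build against they may carry a `VDX` prefix):

```cpp
// Fragile: a private counter drifts as soon as the host seeks, re-runs a
// frame, or starts rendering from the middle of the timeline.
static long g_frameGuess;

int runProc(const FilterActivation *fa, const FilterFunctions *ff) {
    long guessed = g_frameGuess++;              // unreliable; shown only as a contrast

    // Reliable: ask the host which frame is actually being processed.
    const FilterStateInfo *pfsi = fa->pfsi;     // may be NULL; see the note below
    if (pfsi) {
        long srcFrame = pfsi->lCurrentSourceFrame;  // frame number on the source side
        long outFrame = pfsi->lCurrentFrame;        // frame number on the output side
        // ...vary the effect based on srcFrame or outFrame...
    }
    return 0;
}
```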
`FilterStateInfo` is divided into two sets of fields, those for the source video and those for the output video. The difference depends upon how the filter should behave with regard to edits on the timeline in the host. If the filter's behavior is tied to the source (that is, it needs to track the original video or audio content), then it should use the source frame numbers. If the filter instead needs to track post-edit frame numbers, such as applying a fade at a predetermined time, then it should use the output frame numbers.
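To make the choice concrete, here is a minimal sketch: a fade tied to a fixed point on the edited timeline reads the output-side frame number, while an overlay locked to the original content reads the source-side number (the script lookup helper here is hypothetical):

```cpp
int runProc(const FilterActivation *fa, const FilterFunctions *ff) {
    const FilterStateInfo *pfsi = fa->pfsi;
    if (!pfsi)
        return 0;                               // see the note below about NULL

    // Post-edit behavior: fade out over output frames 100..149, regardless of
    // which source frames end up there after editing.
    long outFrame = pfsi->lCurrentFrame;
    float fade = 1.0f;
    if (outFrame >= 100 && outFrame < 150)
        fade = 1.0f - (float)(outFrame - 100) / 50.0f;

    // Source-locked behavior: pick overlay text by the original frame number,
    // so timeline edits do not shift the overlay relative to the content.
    long srcFrame = pfsi->lCurrentSourceFrame;
    // const char *text = LookupScriptText(srcFrame);   // hypothetical helper

    // ...apply 'fade' and draw the text onto fa->dst...
    return 0;
}
```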
Note: The `pfsi` field may be NULL in some host operating modes, or when no input video is present at the time certain filter API calls are issued. If a filter requires this field, it should check its value in `startProc` and `runProc` and report an error if it is absent instead of crashing.
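A sketch of that defensive check, assuming the usual `startProc`/`runProc` entry points and the `Except` error-reporting call in the filter function table:

```cpp
int startProc(FilterActivation *fa, const FilterFunctions *ff) {
    // This filter cannot do anything sensible without frame timing info,
    // so fail cleanly rather than dereferencing a NULL pointer later.
    if (!fa->pfsi) {
        ff->Except("This filter requires frame timing information, "
                   "which the host did not supply in this mode.");
        return 1;       // signal failure to the host
    }
    return 0;
}

int runProc(const FilterActivation *fa, const FilterFunctions *ff) {
    if (!fa->pfsi)      // double-check; some calls can still arrive without it
        return 0;       // pass the frame through unchanged instead of crashing

    // ...normal per-frame processing using fa->pfsi...
    return 0;
}
```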
V12+ only: The `lMicrosecsPerFrame` and `lMicrosecsPerSrcFrame` fields have limited precision. For better precision, use the `mFrameRateLo` and `mFrameRateHi` fields of `fa->src` instead.
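For instance, assuming the `mFrameRateHi`/`mFrameRateLo` pair expresses the frame rate as a numerator/denominator fraction, a filter can keep the exact rational rate instead of the rounded microsecond value:

```cpp
int startProc(FilterActivation *fa, const FilterFunctions *ff) {
    // Older, limited-precision value: e.g. 33367 microseconds for NTSC.
    long usPerFrame = fa->pfsi ? fa->pfsi->lMicrosecsPerFrame : 0;

    // V12+: rational frame rate of the source, e.g. 30000/1001 for NTSC.
    double fps = 0.0;
    if (fa->src.mFrameRateLo)           // guard against division by zero
        fps = (double)fa->src.mFrameRateHi / (double)fa->src.mFrameRateLo;

    // Prefer deriving timestamps from the fraction to avoid rounding drift,
    // e.g. frame N starts at N * mFrameRateLo / mFrameRateHi seconds.
    return 0;
}
```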
When multiple filters are involved, the interaction between the different filters and the fields in the `FilterStateInfo` structure can be difficult to follow. As an example, consider a blur filter that processes frames 1:1, followed by a bob doubler that produces two output frames for each source frame, with edits on the timeline extracting frames 16-17 and 20-21 of the final output.
The key points to remember are:

- Source and output frame numbers seen by a filter are always the same unless the filter changes frame ordering via `prefetchProc`.
- The output frame numbers for a filter become the source frame numbers for the next filter.
- Destination frame numbers always correspond to the final output and are the same for all filters.
Which frame number you use depends on what you intend to do with it. If you are displaying a running timestamp, you probably want the destination frame number. If you are pulling data from an external source such as a script, you likely want the source frame number. Finally, if you are writing a filter that, like the built-in bob doubler, produces two fields for each source frame, you should use the output frame number to tell which field to output for a given source frame.
Note: Although frames can be reordered starting at V12, making the source and output frame numbers different, the output frame number is only available in API V13.
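A sketch of the bob-doubler case, assuming API V13, the single-frame `prefetchProc` form that returns the source frame needed for a given output frame, and the `mOutputFrame` field of `FilterStateInfo` (the field-rendering helper is hypothetical):

```cpp
// Each output frame n is built from source frame n/2 (two fields per frame).
long prefetchProc(const FilterActivation *fa, const FilterFunctions *ff, long lFrame) {
    return lFrame >> 1;     // output frames 2n and 2n+1 both need source frame n
}

int runProc(const FilterActivation *fa, const FilterFunctions *ff) {
    const FilterStateInfo *pfsi = fa->pfsi;
    if (!pfsi)
        return 0;

    // V13+: the filter's own output frame number, which differs from the
    // source frame number because prefetchProc duplicates frames.
    long outFrame = pfsi->mOutputFrame;

    bool topField = (outFrame & 1) == 0;        // even output frames -> first field
    // RenderField(fa->dst, fa->src, topField);  // hypothetical field renderer
    return 0;
}
```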
For filters that have profiling and rendering passes, or that otherwise have complex behavior, it may be useful to determine whether the host is running in preview or rendering mode. This is done via the `flags` field of the `FilterStateInfo` structure, which is only present in API V10 or above. The flags field also indicates whether real-time mode is active, in which case the filter may want to apply lower-quality approximations for speed.
VirtualDub specific: The `kStateRealTime` flag is set in capture mode.
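A sketch of testing these bits, assuming the `kStatePreview` and `kStateRealTime` flag values declared with `FilterStateInfo` (API V10+):

```cpp
int runProc(const FilterActivation *fa, const FilterFunctions *ff) {
    const FilterStateInfo *pfsi = fa->pfsi;
    if (!pfsi)
        return 0;

    bool preview  = (pfsi->flags & FilterStateInfo::kStatePreview)  != 0;
    bool realTime = (pfsi->flags & FilterStateInfo::kStateRealTime) != 0;  // also set in capture

    if (realTime) {
        // ...use a cheaper approximation so capture/playback keeps up...
    } else if (!preview) {
        // ...final render: spend the extra time on the high-quality path...
    }
    return 0;
}
```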
Copyright (C) 2007-2012 Avery Lee.