VirtualDub Plugin SDK 1.2
Setting filter parameters
By implementing the paramProc method, a filter can inform the host of its operating parameters, including the relationship between the input and output frame sizes, frame delays, and buffer overlap behavior.
The simplest `paramProc` just returns a code:
```cpp
long paramProc(VDXFilterActivation *fa, const VDXFilterFunctions *ff) {
    return FILTERPARAM_SWAP_BUFFERS;
}
```
The return value is a bitfield indicating the behavior of the filter. `paramProc` can specify a lot of parameters, but all of them have defaults, so most filters only have to do minimal work here. In this case, the filter is specifying that its source and output have the same frame size, frame rate, length, aspect ratio, and format, and that it copies from one buffer to another. This configuration will work for most filters.
The first option that filters can choose to alter is the `FILTERPARAM_SWAP_BUFFERS` bit in the return value. This indicates whether the filter works in place on a video frame or copies from one frame to another. Which is better depends on the image processing that your filter does in `runProc`. For some algorithms, it is most convenient to read pixels from a buffer, modify them, and write them back to the same buffer. In that case, the filter should operate in place and return with the `FILTERPARAM_SWAP_BUFFERS` bit cleared. Other algorithms that read overlapping areas of the source image need to keep source and output frames separate and should set that bit instead.
A filter can choose between in-place and copy operation depending on configuration parameters, but it cannot change in the middle of a render operation.
This chart outlines some of the effects of choosing the various buffering modes:
Characteristic | In-place | Copy |
---|---|---|
Speed | Generally faster, as only one buffer is accessed. | Generally slower, as two buffers are accessed. |
Memory usage | No additional memory. | Requires an additional frame buffer. |
Source frame access | The source frame is modified as pixels are processed, so only the remainder of the source can be read. | The source frame stays unmodified and all of it is valid for the entire image operation. |
Format conversion | Hard or impossible to output a different image format than the source. | Trivial to use different source and output formats. |
Appropriate filter types | Rendering filters, filters with only temporal processing, color transforms. | Filters that do resampling or warping or use spatial filter kernels. |
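For comparison with the buffer-swapping example above, a minimal sketch of an in-place filter's `paramProc` simply leaves the `FILTERPARAM_SWAP_BUFFERS` bit cleared:

```cpp
long paramProc(VDXFilterActivation *fa, const VDXFilterFunctions *ff) {
    // In-place operation: the filter reads and writes the same buffer,
    // so the FILTERPARAM_SWAP_BUFFERS bit is left cleared.
    return 0;
}
```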
By default, the destination buffer has the same size as the source buffer. This can be changed in `paramProc` by altering the parameters of the `dst` field in the `VDXFilterActivation` structure. The `src` field contains information about the input to the filter and can be used to compute the output parameters:
```cpp
long paramProc(VDXFilterActivation *fa, const VDXFilterFunctions *ff) {
    fa->dst.w = 320;
    fa->dst.h = fa->src.h;
    fa->dst.AlignTo4();
    return FILTERPARAM_SWAP_BUFFERS;
}
```
Changing the width and the height of the output image requires recomputing the `pitch`, `modulo`, and `size` fields as well; the inline `AlignTo4()` method does this for you.
It is also possible to change the parameters for the destination buffer in an in-place filter. In fact, at the very least, `fa->src.offset` should be copied to `fa->dst.offset` in order for cropping to work. The other parameters can also be altered, allowing very fast cropping (by altering `data`, `w`, and `h`) and field dropping (by altering `h` and `pitch`).
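As an illustrative (not definitive) sketch of the field-dropping case, an in-place filter's `paramProc` might halve the height and double the pitch so that the output aliases every other scan line of the source; the `modulo`/`size` bookkeeping below assumes a 32-bit RGB layout:

```cpp
long paramProc(VDXFilterActivation *fa, const VDXFilterFunctions *ff) {
    // In-place filter: copy the source offset so cropping still works.
    fa->dst.offset = fa->src.offset;

    // Field drop (illustrative): keep every other scan line by halving the
    // height and doubling the pitch, so the output aliases the source lines.
    fa->dst.h     = fa->src.h >> 1;
    fa->dst.pitch = fa->src.pitch * 2;

    // Keep the derived fields consistent with the new geometry
    // (assumption: 32-bit pixels, so modulo is pitch minus 4*w bytes).
    fa->dst.modulo = fa->dst.pitch - 4 * fa->dst.w;
    fa->dst.size   = fa->dst.pitch * fa->dst.h;

    return 0;   // in-place: FILTERPARAM_SWAP_BUFFERS cleared
}
```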
Video filters normally operate in one-frame-in, one-frame-out mode. However, it is sometimes necessary for a filter to reference a window of adjacent frames. There are two ways to do this.
The first way is to set the `FILTERPARAM_NEEDS_LAST` flag in `paramProc`. This tells the host to preserve the previous source frame fed to the filter, which is then accessible via `fa->last`. The last source frame buffer uses the same format as the source buffer. This allows for a window of two frames.
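For example, a temporal filter that wants to compare each frame with the one before it could request this with a minimal `paramProc` like the following sketch:

```cpp
long paramProc(VDXFilterActivation *fa, const VDXFilterFunctions *ff) {
    // Ask the host to keep the previous source frame around; runProc can
    // then read it through fa->last alongside the current frame in fa->src.
    return FILTERPARAM_SWAP_BUFFERS | FILTERPARAM_NEEDS_LAST;
}
```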
The other way is to buffer the frames internally in the filter itself. While this allows unlimited window sizes, there is still the problem that the filter can only reference past frames in this manner, and not future frames. The `FILTERPARAM_HAS_LAG(count)` macro overcomes this by indicating to the host that the filter produces frames with a delay. For instance, returning a bitfield including `FILTERPARAM_HAS_LAG(3)` means that the filter outputs filtered frame 0 when it receives frame 3. This permits source frame windows that include future frames as well as past frames.
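A minimal sketch of a filter that buffers frames internally and emits its output three frames late might declare the delay like this:

```cpp
long paramProc(VDXFilterActivation *fa, const VDXFilterFunctions *ff) {
    // Declare a three-frame delay: output frame 0 is produced when the
    // filter receives source frame 3, giving runProc a look-ahead window.
    return FILTERPARAM_SWAP_BUFFERS | FILTERPARAM_HAS_LAG(3);
}
```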
Note: Versions of VirtualDub prior to 1.9.1 do not support frame lags in filters when running in capture or frameserver mode.
In earlier versions of the filter API, the source and destination images always used a 32-bit RGB format. In V12 and up, there is a parallel, enhanced set of image structures that supports more advanced image configurations. The `mpPixmapLayout` field in the V12+ extended `VFBitmap` structure is used for this purpose:
```cpp
long paramProc(VDXFilterActivation *fa, const VDXFilterFunctions *ff) {
    const VDXPixmapLayout& pxsrc = *fa->src.mpPixmapLayout;
    VDXPixmapLayout& pxdst = *fa->dst.mpPixmapLayout;

    // check for a source format that we support
    if (pxsrc.format != nsVDXPixmap::kPixFormat_YUV422_UYVY)
        return FILTERPARAM_NOT_SUPPORTED;

    // set old depth value to zero to indicate new pixmap layout should be used
    fa->dst.depth = 0;

    pxdst.w = 320;
    pxdst.h = pxsrc.h;
    pxdst.format = nsVDXPixmap::kPixFormat_YUV422_UYVY;
    pxdst.pitch = 0;

    return FILTERPARAM_SWAP_BUFFERS | FILTERPARAM_SUPPORTS_ALTFORMATS;
}
```
There are a few steps involved in enabling the new support:
- The API version must be determined to be at least V12. Otherwise, accessing the `mpPixmapLayout` field may cause a crash or return invalid data. (If you declare at least V12 as the minimum API version for your filter, there is no need to check the version explicitly.)
- The `depth` field of the `fa->dst` structure must be set to zero to indicate that `mpPixmapLayout` is active.
- Setting the `pitch` field of the pixmap layout to zero tells the host to establish default parameters for most fields in the layout, most notably `data` and `pitch`. Only `w`, `h`, and `format` then need to be specified.
- In order to allow arbitrary input formats to be received by `paramProc`, the function must return with the `FILTERPARAM_SUPPORTS_ALTFORMATS` bit set (this bit is included in `FILTERPARAM_NOT_SUPPORTED`). The host will then call `paramProc` multiple times to determine which source formats can be used. Otherwise, the filter is assumed to only support 32-bit RGB and will never receive any other format.
Note: It isn't necessary for a filter to support 32-bit RGB input or output; it's acceptable for a filter to accept only YCbCr formats or other RGB formats. This may restrict when your filter can be used with some hosts, however, and will prevent your filter from being used with any host that doesn't support at least API V12. When VirtualDub is confronted with a filter that doesn't support the format produced by the previous filter, it will search for a format that the filter does support and then perform an implicit image conversion on entry to the filter.
The `mFrameRateHi` and `mFrameRateLo` fields of the `VFBitmap` structure indicate the frame rate of a video stream. The two fields form a fraction, such that `mFrameRateHi / mFrameRateLo` is the stream frame rate. `paramProc` can change the frame rate of a stream by altering the value of these fields in the destination format (`fa->dst`).
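As a hedged sketch, a hypothetical frame-doubling filter (assuming the V12+ fields described here) might double both the frame rate and, as discussed further below, the stream length:

```cpp
long paramProc(VDXFilterActivation *fa, const VDXFilterFunctions *ff) {
    // Double the output frame rate; keeping the value as a numerator/denominator
    // pair stays exact for fractional rates such as 30000/1001.
    fa->dst.mFrameRateHi = fa->src.mFrameRateHi * 2;
    fa->dst.mFrameRateLo = fa->src.mFrameRateLo;

    // Twice as many output frames cover the same running time.
    // (A real filter would also handle an unknown source length.)
    fa->dst.mFrameCount = fa->src.mFrameCount * 2;

    return FILTERPARAM_SWAP_BUFFERS;
}
```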
By default, when the frame rate of the video stream is changed, the host samples frames from the source according to timestamp, so that the video still plays at the same speed. This is done by choosing the source frame that corresponds to the time of the center of each output frame.
In most cases, the default mapping is sensible and predictable. Occasionally, though, it may be desirable to directly control the frame mapping in order to ensure consistency with the filter logic, particularly in edge cases (such as the 100ms case above). This is especially important for filters that rely on exact frame timing, such as field split filters. Override the `prefetchProc` entry point to take control:
```cpp
sint64 prefetchProc(const VDXFilterActivation *fa, const VDXFilterFunctions *ff, sint64 frame) {
    // Map every pair of even and odd frames exactly to the same source frame.
    return frame >> 1;
}
```
Note: `prefetchProc` must be implemented in a thread-safe, reentrant manner and be able to execute independently of `runProc`. Prefetch implementations that rely on image processing results are a particular no-no.
It is also possible to change the length of the video stream by altering the `mFrameCount` field. Because audio is not affected by changes in the video filter chain, this is typically only useful when making a video filter that is intended to correct a timing error or otherwise cooperate with an audio filter that is making a similar change.
Copyright (C) 2007-2012 Avery Lee.