GoldSrc:mirv_movie_depthdump

Syntax

mirv_movie_depthdump 0|1|2|3

  • Default: 0

Enables output of images containing depth information:

value   function           image                    speed (normal)   speed (sampled)
0       none               n/a                      n/a              n/a
1       linear             linear depth dump        fast             fastest
2       logarithmic        logarithmic depth dump   slow             slow
3       inverse (OpenGL)   inverse depth dump       fastest          fast
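
As a rough illustration of the three functions, here is a small Python sketch. The linear and inverse (OpenGL) mappings are the standard textbook forms; the logarithmic curve shown is only an assumption, since the exact formula HLAE uses is not documented on this page.

```python
import math

# Illustrative only: the exact formulas HLAE applies are not documented
# on this page; these are the standard textbook forms.

def linear_depth(z, z_near, z_far):
    """Function 1: map eye-space distance z in [z_near, z_far] to [0, 1] linearly."""
    return (z - z_near) / (z_far - z_near)

def inverse_depth(z, z_near, z_far):
    """Function 3: standard OpenGL window-space depth; non-linear, with
    most precision concentrated near z_near."""
    return z_far * (z - z_near) / (z * (z_far - z_near))

def logarithmic_depth(z, z_near, z_far):
    """Function 2: one common logarithmic mapping -- an ASSUMPTION here;
    HLAE's actual log curve may differ."""
    return math.log(z / z_near) / math.log(z_far / z_near)

# All three map z_near to 0.0 and z_far to 1.0:
for f in (linear_depth, inverse_depth, logarithmic_depth):
    print(f.__name__, f(100.0, 4.0, 4096.0))
```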

Split Streams

When using mirv_movie_splitstreams (matte effects), have a look at mirv_depth_streams.

Tip: 23-bit precision OpenEXR streams

See mirv_depth_exr

Tip: Depth slicing

The debug commands __mirv_depth_slice_lo and __mirv_depth_slice_hi allow outputting only a slice of the depth buffer. This is useful when your output format has much lower precision than your graphics card's depth buffer, which is usually the case with the default settings (see __mirv_depth_bpp).

This is especially useful with function 3 (inverse (OpenGL)); a sketch of the precision argument follows below.
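
A minimal Python sketch of why slicing helps, assuming the slice commands remap the dumped range so that the full 8-bit output covers only the interval [lo, hi] of the normalised depth buffer (the exact semantics of these debug commands are an assumption here):

```python
# Rough precision illustration under the assumption stated above.

def quantization_step(lo=0.0, hi=1.0, out_bits=8):
    """Smallest depth difference one output step can represent when
    only the slice [lo, hi] is dumped with out_bits of precision."""
    return (hi - lo) / (2 ** out_bits - 1)

full_range = quantization_step()          # whole buffer squeezed into 8 bits
sliced     = quantization_step(0.9, 1.0)  # only the last 10% of the range
print(full_range / sliced)                # the slice is about 10x finer
```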

Tip: Calculating actual distances with linear depthdump

You can use the linear depth dump to easily calculate the actual distance for a specific pixel in the dump.

The following information is required:

  • p: The index value (0 - 255) of the pixel in the linear depth dump for which we want to calculate the distance
  • N: The zNear value, which tells us how far away black pixels (index 0) are. It depends on the map; to get the value, load that map and execute __mirv_depth_info.
  • F: The zFar value, which tells us how far away white pixels (index 255) are. It depends on the map; to get the value, load that map and execute __mirv_depth_info.

Now you can calculate the distance d as follows: d = (p / 255) * (F - N) + N

Since one unit in Half-Life is said to be about 1 inch, the result will be in inches. 1 inch is about 2.54 cm.

Please be aware of the limited accuracy: the maximum distance error of the linear 8-bit depth dump is about 2 to 3 feet on most maps. __mirv_depth_info will display a more accurate estimate of the maximum error based on the current map.
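
A minimal Python sketch of this calculation; the N and F values below are made up for illustration, read the real ones from __mirv_depth_info on your map:

```python
INCH_TO_CM = 2.54  # one Half-Life unit is said to be about one inch

def linear_depth_to_distance(p, z_near, z_far):
    """Distance in Half-Life units (inches) for pixel index p (0-255)
    of a linear depth dump; z_near / z_far come from __mirv_depth_info."""
    return (p / 255.0) * (z_far - z_near) + z_near

# Made-up example values -- read the real N and F from __mirv_depth_info:
N, F = 4.0, 4096.0
d = linear_depth_to_distance(128, N, F)
print(f"{d:.1f} units ~= {d * INCH_TO_CM:.1f} cm")
# Worst-case quantisation error is half of one index step:
print(f"max error ~= {(F - N) / 255.0 / 2.0:.1f} units")
```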

Technical details

When sampling is enabled, HLAE internally linearises the depth buffer before sampling (and converts back afterwards). The sampling itself averages by integration, which happens with a two-point approximation in the default sampling settings. So it is similar to what happens with opaque colour images upon sampling, except that it is not based on a theoretical camera model (though you can imagine it as collecting depth rays if you like).
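
A hedged sketch of that pipeline in Python, where to_linear and from_linear are hypothetical stand-ins for whatever conversion the chosen depthdump function requires (they are not HLAE API), and the sub-frames are assumed to be evenly spaced in time:

```python
def sample_depth(frames, to_linear, from_linear):
    """Average the depth values of the sub-frames covering one output
    frame by trapezoidal integration (the two-point approximation)."""
    lin = [to_linear(d) for d in frames]
    n = len(lin) - 1  # number of sub-frame intervals
    # average of the trapezoids over n equal intervals
    avg = sum((lin[i] + lin[i + 1]) / 2.0 for i in range(n)) / n
    return from_linear(avg)

# Example with identity conversions and three sub-frames:
print(sample_depth([0.2, 0.4, 0.9], lambda x: x, lambda x: x))
```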

See also

  • mirv_depth_streams
  • mirv_depth_exr
  • __mirv_depth_info
