What is Quad Views rendering?
The target audience for this page ranges from end users with minimal experience/understanding of foveated rendering to application developers seeking guidance when implementing foveated rendering in their engine. The reader should familiarize themselves with the basic idea of foveated rendering prior to going through this document.
Table of Contents
- Overview
- Differences
- Application support
- This project
- The case of Digital Combat Simulator (DCS)
- References
Overview
Quad Views rendering, sometimes referred to as "multi-projection" or "dynamic projection", is a rendering technique that an application may use to implement foveated rendering.
With typical stereo rendering, the application renders each eye at the full resolution of the screen (or whichever render scale the user chose). As a result, the application renders all parts of the viewable scene at the same quality, regardless of where the user is looking.
Increasing the resolution typically raises the image quality, but lowers the performance. Reducing the resolution typically raises the performance, but lowers the image quality.
With quad views rendering, the application may render the part of the viewable scene that is being directly looked at by the user at higher resolution (increasing quality in that area of the screen) while rendering the rest of the scene (that is in the peripheral view of the eye) at lower resolution (increasing performance by reducing the workload of the GPU).
With an inner and an outer view per eye, this totals 4 views instead of the typical 2, hence the name "quad views" rendering.
Note that quad views rendering does not always imply eye-tracked foveated rendering. If the inner views are not programmed to follow the eye gaze, only fixed foveated rendering will be achieved. Quad views rendering needs to be used in conjunction with the eye tracker and additional logic that computes specific projections for the inner views in order to achieve eye-tracked foveated rendering.
This technique has multiple advantages:
- It does not need specific GPU support; it can work on any GPU.
- The same application code used to render stereo views can be reused to implement quad views rendering with very few changes.
- It can typically reduce both the shading and rasterization cost, since the total pixel count drops by up to ~65%.
It may be challenging at first to see the benefits of this approach. But comparing the number of pixels drawn with this technique, while the perceived image quality remains the same (or is even reported to be superior), yields very convincing evidence:
Headset | Rendering | Peripheral view | Pixel count | Focus view | Pixel count | Total pixel count | Gains |
---|---|---|---|---|---|---|---|
Quest Pro | Stereo | 2816 x 2896 | 8,155,136 | - | - | 8,155,136 | - |
Quest Pro | Quad views | 1126 x 1158 | 1,303,908 | 1548 x 1592 | 2,464,416 | 3,768,324 | 46.21% of stereo |
Varjo Aero | Stereo | 4148 x 3556 | 14,750,288 | - | - | 14,750,288 | - |
Varjo Aero | Quad views | 2192 x 1880 | 4,120,960 | 1200 x 1200 | 1,440,000 | 5,560,960 | 37.70% of stereo |
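For reference, the "Gains" column is simply the quad views total pixel count divided by the stereo total. For the Quest Pro numbers above:

$$\frac{1126 \times 1158 + 1548 \times 1592}{2816 \times 2896} = \frac{3{,}768{,}324}{8{,}155{,}136} \approx 46.21\%$$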
You may see some performance measurements captured with a real-world application later in this document.
There are some limitations to this technique:
- The workload on the CPU may increase as a result of drawing the same geometry twice for the region overlapping the inner and outer views.
- It is incompatible with several visual effects. Varjo recommends disabling the following effects when doing multi-projection: Automatic/dynamic exposure, Bloom, Vignette, Chromatic aberration, Film grain, Depth of Field, Lens distortion, Motion blur, Panini projection, Screen Space Ambient Occlusion, Screen Space reflection (see Varjo's recommendations).
Differences
Quad views rendering is not the only technology that can be used to implement foveated rendering. Other techniques can be used, with different pros and cons. Here are a few of them:
- Multi-Res Shading (MRS): the application divides the screen into sparse rectangular regions (typically 9 regions forming a 3x3 grid) and sets up a viewport for each region with a different resolution. Due to the complexity of this setup, this technique can only produce fixed foveated rendering (no use of the eye tracker).
  - Pros: supported on earlier GPUs (GTX series).
  - Cons: quite complex to implement, necessitates reworking shaders in the application, cannot do dynamic foveated rendering.
  - This technique is effective at reducing GPU workload.
  - Examples of applications implementing MRS for foveated rendering: Batman Arkham VR.
- Variable Rate Shading (VRS): the screen is divided into tiles of 16x16 pixels, and the application can specify a shading rate for each tile (how many pixels in a tile must be rendered) through a so-called "shading mask" or "density mask". The application is responsible for generating such a mask (using the eye gaze data from the headset and possibly user parameters such as "ring sizes") and inserting the appropriate commands during rendering to control when to use or not use variable rate shading (a mask generation sketch follows this list).
  - Pros: relatively easy to implement, allows a fully customizable density mask (e.g., an oval pattern) with gradient shading rates.
  - Cons: not universally supported (e.g., AMD does not support VRS with Direct3D 11, even on their newest GPUs), and the gains are limited to applications bound by pixel shading: the cost of rasterization remains the same.
  - This technique is effective at reducing GPU workload.
  - Examples of applications implementing VRS for foveated rendering: OpenXR Toolkit, vrperfkit.
- Radial Density Masking: this technique was introduced by Alex Vlachos in his GDC presentation (starting at slide 22). Like VRS above, the application uses a mask to tell the pixel shaders to skip shading every other pixel (discarding them), then uses a reconstruction filter in post-processing to recover the pixels that were skipped.
  - Pros: relatively easy to implement.
  - Cons: gains are limited, since the shading rate can only be halved in the peripheral region.
  - This technique is effective at reducing GPU workload.
- Level Of Detail (LOD) using eye gaze: this technique is a variation of the typical "geometry culling", where an engine computes which geometry/objects are visible prior to rendering the scene, given the current view frustum. Just like with quad views rendering, the application computes a second view frustum corresponding to the eye gaze, and uses it not for culling, but to reduce the level of detail of the geometry outside of this view frustum.
  - Pros: supported on any GPU.
  - Cons: might require per-asset tuning of the LOD curve to avoid visible artifacts.
  - This technique is effective at reducing both CPU and GPU workload.
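To illustrate the VRS approach above, here is a minimal, hypothetical sketch of density mask generation: it classifies each 16x16 tile by its distance to the gaze point, using two configurable "ring sizes". The function name, the shading rate encoding and the parameters are illustrative, not taken from any specific engine or API.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Illustrative shading rates: render all, half, or a quarter of the pixels.
enum class ShadingRate : uint8_t { Full, Half, Quarter };

// Build one shading rate per 16x16 tile, based on the distance between the
// tile center and the gaze point (all values in pixels).
std::vector<ShadingRate> buildDensityMask(uint32_t width, uint32_t height,
                                          float gazeX, float gazeY,
                                          float innerRing, float outerRing) {
    const uint32_t tilesX = (width + 15) / 16;
    const uint32_t tilesY = (height + 15) / 16;
    std::vector<ShadingRate> mask(tilesX * tilesY);
    for (uint32_t ty = 0; ty < tilesY; ty++) {
        for (uint32_t tx = 0; tx < tilesX; tx++) {
            const float dx = (tx * 16.f + 8.f) - gazeX;
            const float dy = (ty * 16.f + 8.f) - gazeY;
            const float distance = std::sqrt(dx * dx + dy * dy);
            mask[ty * tilesX + tx] = distance < innerRing ? ShadingRate::Full
                                   : distance < outerRing ? ShadingRate::Half
                                                          : ShadingRate::Quarter;
        }
    }
    return mask;
}
```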
Application support
An application needs to be developed specifically for quad views rendering: it is not a rendering mode that your platform (e.g., Oculus) can simply "force" onto any application.
The most "raw" form of quad views foveated rendering can in theory be achieved with any OpenVR or OpenXR application. The developer of the application can do all the work themselves:
1. Retrieve the current views (full FOV camera projections for each eye) based on the current headset tracking data;
2. Render the outer views using the projections obtained in 1);
3. Retrieve the eye gaze (eye tracking data) information;
4. Compute the partial FOV camera projections for the inner views using the full FOV projections from 1) and the eye gaze data from 3) (see the sketch after this list);
5. Render the inner views using the projections obtained in 4);
6. Submit all 4 views to the platform.
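As an illustration of step 4, here is a minimal sketch of how an application might derive the partial FOV for an inner view from the full FOV and the eye gaze. This is a hypothetical helper (the name, the `sectionRatio` parameter and the clamping strategy are illustrative), assuming the gaze is a unit vector in view space with -Z forward, as in OpenXR:

```cpp
#include <algorithm>
#include <cmath>
#include <openxr/openxr.h>

// Derive the inner view FOV: a rectangle covering 'sectionRatio' of the full
// FOV (in tangent units), centered on the gaze and clamped to the full FOV.
XrFovf computeInnerFov(const XrFovf& fullFov, const XrVector3f& gaze,
                       float sectionRatio) {
    // Project the gaze direction onto the image plane (tangent units).
    // Assumes gaze.z < 0, i.e. the user looks forward.
    const float cx = gaze.x / -gaze.z;
    const float cy = gaze.y / -gaze.z;

    // Half-extents of the inner view, also in tangent units.
    const float halfW = sectionRatio * (std::tan(fullFov.angleRight) - std::tan(fullFov.angleLeft)) / 2.f;
    const float halfH = sectionRatio * (std::tan(fullFov.angleUp) - std::tan(fullFov.angleDown)) / 2.f;

    XrFovf fov;
    fov.angleLeft = std::atan(std::max(cx - halfW, std::tan(fullFov.angleLeft)));
    fov.angleRight = std::atan(std::min(cx + halfW, std::tan(fullFov.angleRight)));
    fov.angleDown = std::atan(std::max(cy - halfH, std::tan(fullFov.angleDown)));
    fov.angleUp = std::atan(std::min(cy + halfH, std::tan(fullFov.angleUp)));
    return fov;
}
```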
There have been several challenges in the past with the steps above.
The main challenge has been retrieving the eye tracking data (step 3), which was not universally possible across devices. It typically involved using the software interface (SDK) specific to the headset being targeted by the application.
Another challenge was the difficulty of integrating the 2 additional views (steps 4 and 6). Computing the FOV for the inner projection may require a lot of tweaking, and submitting the rendered views must account for the projection relative to the full screen (the offset and distortion of that view so that it "pastes" well on top of the stereo projections).
Fortunately, OpenXR can provide the tools to address these challenges:
- OpenXR defines a universal interface for retrieving eye tracking data.
- OpenXR can easily composite multiple views into a single frame.
- OpenXR even defines some (optional) interfaces to do all of the quad views foveated rendering heavy lifting.
Varjo has pioneered the quad views rendering front and supports out-of-the-box two OpenXR extensions, `XR_VARJO_quad_views` and `XR_VARJO_foveated_rendering`, aimed at making quad views foveated rendering seamless.
With these OpenXR extensions, the application flow now becomes much simpler:
1. Retrieve the current views (both for the full FOV and the foveated region);
2. Render the outer views using the projections obtained in 1);
3. Render the inner views using the projections obtained in 1);
4. Submit all 4 views at once to the platform.
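For illustration, here is a minimal sketch of what locating all 4 views might look like with these extensions enabled (error handling omitted; the structure and enum names come from the Varjo OpenXR extensions):

```cpp
#include <cstdint>
#include <openxr/openxr.h>

void locateQuadViews(XrSession session, XrSpace space, XrTime displayTime) {
    // Ask the runtime for gaze-driven inner views (XR_VARJO_foveated_rendering).
    XrViewLocateFoveatedRenderingVARJO foveated{XR_TYPE_VIEW_LOCATE_FOVEATED_RENDERING_VARJO};
    foveated.foveatedRenderingActive = XR_TRUE;

    XrViewLocateInfo locateInfo{XR_TYPE_VIEW_LOCATE_INFO, &foveated};
    locateInfo.viewConfigurationType = XR_VIEW_CONFIGURATION_TYPE_PRIMARY_QUAD_VARJO;
    locateInfo.displayTime = displayTime;
    locateInfo.space = space;

    // Views 0-1 are the outer (full FOV) views, views 2-3 the inner views.
    XrView views[4] = {{XR_TYPE_VIEW}, {XR_TYPE_VIEW}, {XR_TYPE_VIEW}, {XR_TYPE_VIEW}};
    XrViewState viewState{XR_TYPE_VIEW_STATE};
    uint32_t viewCount = 0;
    xrLocateViews(session, &locateInfo, &viewState, 4, &viewCount, views);

    // Render each view with views[i].pose and views[i].fov, then submit all 4
    // in a single XrCompositionLayerProjection.
}
```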
While Meta has been providing support for some foveated rendering techniques in standalone mode (applications running Android on the headset) through the `XR_META_foveation_eye_tracked` extension, they are very far behind on the PCVR front:
- The Oculus OpenXR runtime does not provide support for these OpenXR extensions;
- Meta opted to not adopt the universal interface for eye tracking, and is instead using their own interface;
- The Oculus compositor does not support compositing views with partial FOV (`fovMutable`).
This project extends the Oculus OpenXR runtime to expose the quad views and foveated rendering OpenXR extensions just like the Varjo runtime does out-of-the-box.
Unreal Engine support
The Varjo Plugin for Unreal Engine integrates quad views rendering and foveated rendering, making it very simple to add these functionalities to your project:
[Screenshot of the Varjo plugin options. Image courtesy of Varjo.]
This is the plugin and options that Pavlov VR uses to ship foveated rendering to its users today.
While it may be counter-intuitive, this support is not specific to Varjo headsets. Any platform may implement the `XR_VARJO_quad_views` and `XR_VARJO_foveated_rendering` extensions in their OpenXR runtime or add them via 3rd party software (like Quad-Views-Foveated), making applications developed with the Varjo OpenXR plugin work with foveated rendering.
The developer must be careful with the features and visual effects they use once they enable quad views foveated rendering. See Varjo's recommendations regarding which effects do not play well with quad views foveated rendering.
Unity support
Unlike the Varjo plugin for Unreal Engine, the Varjo plugin for Unity does not support OpenXR and cannot leverage OpenXR support for quad views rendering.
However, the base Unity OpenXR Plugin can do quad views rendering, though that feature is somewhat hidden.
After setting up support for OpenXR in your Unity project, create a new project script which instantiates a blank OpenXR feature that requests support for `XR_VARJO_quad_views`. Here is an example of such a script:
```csharp
#if UNITY_EDITOR
using UnityEditor;
using UnityEditor.XR.OpenXR.Features;
#endif

namespace UnityEngine.XR.OpenXR.Features.QuadViews
{
#if UNITY_EDITOR
    [OpenXRFeature(UiName = "Quad Views",
        BuildTargetGroups = new[] { BuildTargetGroup.Standalone, BuildTargetGroup.WSA },
        Company = "Unity",
        Desc = "Enables quad views rendering.",
        DocumentationLink = "",
        FeatureId = "com.unity.openxr.features.quadviews",
        OpenxrExtensionStrings = "XR_VARJO_quad_views",
        Version = "1")]
#endif
    // A blank feature: requesting the XR_VARJO_quad_views extension is all
    // that is needed for the Unity OpenXR Plugin to enable quad views rendering.
    public class QuadViewsOpenXRFeature : OpenXRFeature
    {
    }
}
```
You may now go to 'Project Settings' -> 'XR Plug-in Management' -> 'OpenXR', then under the OpenXR plugin, check the new 'Quad Views' feature that the script added.
The developer must be careful with the features and visual effects they use once they enable quad views foveated rendering. See Varjo's recommendations regarding which effects do not play well with quad views foveated rendering.
This project
Motivation
This project aims at providing a functional implementation of what is needed by an application to use quad views foveated rendering with OpenXR. There are only two high-volume PC applications at the time of writing [June 2023] implementing support for quad views rendering: Digital Combat Simulator (DCS) and Pavlov VR. Both are extremely successful games with a wide audience. However, they can only benefit from quad views foveated rendering on Varjo headsets (and soon the Pimax Crystal with PimaxXR).
By bringing support for quad views foveated rendering with OpenXR to many headsets including the Meta Quest Pro, the author hopes to demonstrate to application developers that implementing quad views rendering support produces a superior VR experience (through better performance allowing a smoother experience, higher quality settings, or both). The author also hopes to raise awareness with headset platform vendors of the importance of providing tools to assist with foveated rendering support for PC applications.
Last but not least, the author really enjoys writing OpenXR support for features that delight VR users on PC!
Implementation details
Quad-Views-Foveated is an OpenXR API layer that is loaded between the application and the Oculus OpenXR runtime. It advertises support for the `XR_VARJO_quad_views` and `XR_VARJO_foveated_rendering` extensions that the Oculus OpenXR runtime otherwise lacks.
The API layer is built on the customizable OpenXR-Layer-Template template that accelerates development of experiments and features for OpenXR.
The API layer then implements all functionality needed by these extensions on top of what the Oculus OpenXR runtime offers. It effectively adds the `XrViewConfigurationType` for quad views rendering (`XR_VIEW_CONFIGURATION_TYPE_PRIMARY_QUAD_VARJO`) and implements all related features for that view configuration, notably:
- It extends `xrEnumerateViewConfigurationViews()` to enumerate the desired resolutions for both sets of views. These resolutions are in turn used by the application to allocate swapchains for rendering (a simplified sketch follows this list).
- It sets up eye tracking through the `XR_FB_eye_tracking_social` extension and adapts the "social eye tracking" data to compute the projected eye gaze.
- It extends `xrLocateViews()` to return the projections for all 4 views. It piggybacks on the poses and projections for the `XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO` view configuration, and creates the extra set of projections based on the eye gaze data and user parameters (such as the section of the FOV to render for the inner views).
- Upon frame submission with `xrEndFrame()`, it combines the 4 views into a single stereo projection layer, drawing the inner views on top of the outer views.
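As an example of the first point, here is a hypothetical, heavily simplified sketch of how an API layer can implement `xrEnumerateViewConfigurationViews()` for the quad views configuration on top of the runtime's stereo support. The multipliers, the function names and the forwarding mechanism are illustrative; the actual project logic is more elaborate:

```cpp
#include <openxr/openxr.h>

// Filled in when the layer loads; points to the next layer or the runtime.
extern PFN_xrEnumerateViewConfigurationViews nextEnumerateViews;

// Illustrative tuning parameters.
constexpr float kPeripheralMultiplier = 0.4f; // Downscale the outer views.
constexpr float kFocusMultiplier = 0.55f;     // Size of the inner views.

XrResult layerEnumerateViews(XrInstance instance, XrSystemId systemId,
                             XrViewConfigurationType configType,
                             uint32_t capacity, uint32_t* countOutput,
                             XrViewConfigurationView* views) {
    if (configType != XR_VIEW_CONFIGURATION_TYPE_PRIMARY_QUAD_VARJO) {
        return nextEnumerateViews(instance, systemId, configType, capacity,
                                  countOutput, views);
    }

    // Query the stereo resolutions from the runtime underneath.
    XrViewConfigurationView stereo[2] = {{XR_TYPE_VIEW_CONFIGURATION_VIEW},
                                         {XR_TYPE_VIEW_CONFIGURATION_VIEW}};
    uint32_t stereoCount = 0;
    const XrResult result = nextEnumerateViews(
        instance, systemId, XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO, 2,
        &stereoCount, stereo);
    if (XR_FAILED(result)) return result;

    *countOutput = 4;
    if (capacity == 0) return XR_SUCCESS; // Size query only.
    if (capacity < 4) return XR_ERROR_SIZE_INSUFFICIENT;

    for (uint32_t i = 0; i < 4; i++) {
        // Views 0-1: downscaled outer views; views 2-3: inner (focus) views.
        const float mult = (i < 2) ? kPeripheralMultiplier : kFocusMultiplier;
        views[i] = stereo[i % 2];
        views[i].recommendedImageRectWidth =
            (uint32_t)(stereo[i % 2].recommendedImageRectWidth * mult);
        views[i].recommendedImageRectHeight =
            (uint32_t)(stereo[i % 2].recommendedImageRectHeight * mult);
    }
    return XR_SUCCESS;
}
```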
Due to the lack of support for `fovMutable` in the Oculus OpenXR runtime, the API layer must project the inner views into a backbuffer that spans the entire FOV, with the unused regions left transparent. This is done through a simple shader.
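The placement of the inner view within that full-FOV backbuffer follows directly from the tangents of the FOV angles. Here is a hypothetical helper illustrating the math (the name and the top-down row convention are assumptions, not the project's actual code):

```cpp
#include <cmath>
#include <cstdint>
#include <openxr/openxr.h>

// Returns the pixel rectangle covered by 'innerFov' within a 'width' x
// 'height' image spanning 'fullFov'.
XrRect2Di innerViewport(const XrFovf& fullFov, const XrFovf& innerFov,
                        int32_t width, int32_t height) {
    const float l = std::tan(fullFov.angleLeft), r = std::tan(fullFov.angleRight);
    const float d = std::tan(fullFov.angleDown), u = std::tan(fullFov.angleUp);
    XrRect2Di rect;
    rect.offset.x = (int32_t)((std::tan(innerFov.angleLeft) - l) / (r - l) * width);
    rect.extent.width = (int32_t)((std::tan(innerFov.angleRight) - std::tan(innerFov.angleLeft)) / (r - l) * width);
    // Image rows typically run top-down, so measure from the top edge (angleUp).
    rect.offset.y = (int32_t)((u - std::tan(innerFov.angleUp)) / (u - d) * height);
    rect.extent.height = (int32_t)((std::tan(innerFov.angleUp) - std::tan(innerFov.angleDown)) / (u - d) * height);
    return rect;
}
```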
Because multiple projection layers may dramatically increase the cost of motion reprojection (ASW) or even disable that capability, the API layer draws all views into a single stereo projection layer, rather than submitting two stereo projection layers with the proper alpha-blending configuration.
In addition to these required functionalities, the API layer also implements extra features to improve the experience:
- AMD Contrast Adaptive Sharpening (CAS) is applied to the inner views in order to improve the sharpness and perceived quality of the foveated region.
- The edges of the inner views are gradually alpha-blended with the outer views to create a smoother transition between the pixel densities of the foveated region and the peripheral region (a minimal sketch of this blending follows this list).
- Turbo Mode, a signature feature from OpenXR Toolkit, is integrated. It is not related to foveated rendering, but this tweak to the application's frame timing is very popular and is now usable with Quad-Views-Foveated.
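As a sketch of the edge blending mentioned above (the function and its parameters are illustrative, not the project's actual code), the alpha can simply ramp from 0 to 1 over a configurable border of the inner view:

```cpp
#include <algorithm>

// 'u' and 'v' are normalized coordinates within the inner view ([0, 1]);
// 'border' is the fraction of the view used for the transition (e.g. 0.1).
float edgeAlpha(float u, float v, float border) {
    const float du = std::min(u, 1.f - u); // Distance to the nearest vertical edge.
    const float dv = std::min(v, 1.f - v); // Distance to the nearest horizontal edge.
    return std::clamp(std::min(du, dv) / border, 0.f, 1.f);
}
```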
The end-to-end processing in the API layer to composite the views and apply the various effects is below 200 microseconds on both the CPU and the GPU. This overhead is essentially negligible in comparison to the gains from drawing far fewer pixels.
Extensibility
This project currently targets headsets with eye tracking support via OpenXR, either through the `XR_EXT_eye_gaze_interaction` extension (the cross-vendor, generalist eye gaze API, supported on Varjo, Pimax and Vive headsets) or the `XR_FB_eye_tracking_social` extension (Meta's proprietary extension for social eye tracking, supported on Quest Pro). However, the OpenXR quad views/foveated extensions logic, the projection computation and the views composition are all generic and may be used as-is with any OpenXR runtime.

The code to access the eye tracker may be rewritten to support any other eye tracker; the only API to expose to the rest of the API layer is the `getEyeGaze()` function, which returns a unit vector representing the gaze in 3D space, relative to the center of the eyes.
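In other words, an alternative integration only needs to fulfill a contract along these lines (the exact signature is inferred from the description above and may differ from the project's code):

```cpp
#include <optional>
#include <openxr/openxr.h>

// Returns a unit vector pointing where the user is looking, relative to the
// center of the eyes (-Z forward, as in OpenXR view space), or nothing when
// no gaze sample is available for the requested time.
std::optional<XrVector3f> getEyeGaze(XrTime time) {
    // Trivial placeholder: a "tracker" that always looks straight ahead.
    return XrVector3f{0.f, 0.f, -1.f};
}
```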
This code, licensed under the very permissive MIT license, can be used to bring the quad views/foveated OpenXR extensions to any device that exposes the headset's eye tracking data on the PC. As a matter of fact, the author has made this project work with the HP Reverb G2 Omnicept simply by extending the initialization of the eye tracker and the implementation of `getEyeGaze()`, and it took less than 30 minutes to do so.
Note that the API layer only supports Direct3D 11 applications today, but the author may add support for additional graphics APIs as necessary.
The case of Digital Combat Simulator (DCS)
The story
Digital Combat Simulator (DCS) is a popular combat flight simulation game developed primarily by Eagle Dynamics.
In early 2022, support for quad views rendering was published and advertised as increasing clarity on the Varjo VR-3 and XR-3 headsets. These headsets are built with 2 display panels per eye, showing content at different resolutions. This effectively constitutes fixed foveated rendering.
In early 2023, Eagle Dynamics ported DCS to OpenXR, and with it the use of the `XR_VARJO_quad_views` extension. This however continued to provide only fixed foveated rendering, as using that extension without the `XR_VARJO_foveated_rendering` extension does not produce a dynamic projection that moves the inner views with the eye gaze.
The Varjo-Foveated OpenXR API layer was quickly developed in March 2023 to force the game to request and set up `XR_VARJO_foveated_rendering`, effectively adding eye tracking support to the quad views (non-foveated) rendering support. With this addition, users of the Varjo Aero, VR-3 and XR-3 could now enjoy eye-tracked foveated rendering (or "DFR" for Dynamic Foveated Rendering, as the community quickly came to call this mode) with superior performance and quality.
In parallel to Varjo-Foveated, the author started developing `XR_VARJO_quad_views` and `XR_VARJO_foveated_rendering` support in their OpenXR runtime for Pimax headsets, PimaxXR. This implementation convinced the author that supporting quad views rendering in an OpenXR runtime is no more complex than leveraging the stereo views support and multiple composition layers.
UPDATE 07/10/23: The support for quad views foveated rendering in PimaxXR is being reworked to rely on the Quad-Views-Foveated code.
Armed with the learnings from Varjo-Foveated and PimaxXR and a recurring demand from Quest Pro users, the Quad-Views-Foveated project was born.
The performance
At the time of writing [June 2023], users of Varjo headsets have been enjoying Varjo-Foveated for just under 3 months, and the response has been immensely positive.
Here are some measurements captured with DCS and Varjo Aero by two different users on high-end systems, with many settings cranked all the way up:
In each chart, the top bar corresponds to the use of quad views with foveated rendering, while the bottom bar corresponds to stereo views without foveated rendering.
References
- Foveated Rendering [Varjo]
- Multi-Res Shading (MRS) [Nvidia]
- Variable Rate Shading (VRS) [Nvidia]
- Advanced VR Rendering Performance [Alex Vlachos]
- The OpenXR Specification [Khronos]