VirtualGL
The traditional method of displaying OpenGL applications remotely is through indirect rendering in a remote X server. This approach supports 3D hardware acceleration, but it causes all of the OpenGL commands and 3D data to be sent over the network and rendered on the client machine (the remote X server). This is not a tenable proposition unless the data is relatively small and static, the network is very fast, and the OpenGL application is specifically tuned for a remote X Window environment[1].
VirtualGL is an open source toolkit that gives any Unix or Linux remote display software the ability to run OpenGL applications with full 3D hardware acceleration. With VirtualGL, the OpenGL commands and 3D data are instead redirected to a 3D graphics accelerator (GPU) in the application server, and only the rendered 3D images are sent to the client machine.
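In practice this redirection is transparent to the application: you launch it through the vglrun wrapper, which intercepts the GLX calls and sends the rendering to the server's GPU. As a rough sketch of what that looks like (glxgears is just a stand-in for any OpenGL application, and :0 is assumed to be the display of the GPU-attached X server):
$ vglrun glxgears
$ vglrun -d :0 glxgears
The -d option explicitly selects the 3D X server; by default vglrun uses :0.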
We'll install VirtualGL and the high-speed, 3D-friendly X proxy TurboVNC on the GPU nodes.
I've added the package VirtualGL-2.3.3.x86_64.rpm to our local Rocks repository. So we can install VirtualGL simply with:
# yum install VirtualGL
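A quick sanity check that the package is in place (the paths are those used by the stock VirtualGL RPM, so treat this as illustrative rather than required):
$ rpm -q VirtualGL
$ which vglrun vglserver_config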
VirtualGL requires access to a GPU in the application server so that it can create off-screen pixel buffers (Pbuffers) and redirect the 3D rendering from X windows into these Pbuffers. On Linux, the only way to share the application server’s GPU among multiple users is to grant those users access to the 3D X server (the X server attached to the GPU)[2].
# vglserver_config

1) Configure server for use with VirtualGL
2) Unconfigure server for use with VirtualGL
X) Exit

Choose: 1

Restrict 3D X server access to vglusers group (recommended)? [Y/n] n
Restrict framebuffer device access to vglusers group (recommended)? [Y/n] n
Disable XTEST extension (recommended)? [Y/n] Y
... Modifying /etc/security/console.perms to disable automatic permissions for DRI devices ...
... Creating /etc/modprobe.d/virtualgl.conf to set requested permissions for /dev/nvidia* ...
... Attempting to remove nvidia module from memory so device permissions will be reloaded ...
... Granting write permission to /dev/nvidia0 /dev/nvidiactl /dev/nvidia-uvm for all users ...
... /etc/X11/xorg.conf has been saved as /etc/X11/xorg.conf.orig.vgl ...
... Modifying /etc/X11/xorg.conf to enable DRI permissions for all users ...
... /etc/gdm/Init/Default has been saved as /etc/gdm/Init/Default.orig.vgl ...
... Adding xhost +LOCAL: to /etc/gdm/Init/Default script ...
... /etc/gdm/custom.conf has been saved as /etc/gdm/custom.conf.orig.vgl ...
... Disabling XTEST extension in /etc/gdm/custom.conf ...
... Setting default run level to 5 (enabling graphical login prompt) ...
... Commenting out DisallowTCP line (if it exists) in /etc/gdm/custom.conf ...
Done. You must restart the display manager for the changes to take effect.

IMPORTANT NOTE: Your system uses modprobe.d to set device permissions. You must execute rmmod nvidia with the display manager stopped in order for the new device permission settings to become effective.
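Per the note at the end of the output, the new device permissions in /etc/modprobe.d/virtualgl.conf only take effect once the nvidia kernel module is reloaded with the display manager stopped. Assuming nothing else (e.g. a running CUDA job) is holding the module, that amounts to:
# telinit 3
# rmmod nvidia
The module is loaded again automatically when the X server comes back up in the next step.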
Start the X server:
# telinit 5
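Once the X server is running, it is worth checking that a regular (non-root) user can reach the GPU through it. Since we did not restrict access to the vglusers group above, no xauth setup should be needed; glxinfo is the standard GLX utility and :0 is assumed to be the display attached to the GPU:
$ DISPLAY=:0 glxinfo | grep "OpenGL renderer"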
I've added the package turbovnc-1.2.2.x86_64.rpm to our local Rocks repository. So we can install TurboVNC simply with:
# yum install turbovnc
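As with VirtualGL, a quick check that the server script landed where we expect it (TurboVNC installs under /opt/TurboVNC by default):
$ rpm -ql turbovnc | grep bin/vncserver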
Download TurboVNC viewer and install it on the client machine.
From the client machine, SSH to Hyades:
$ ssh hyades.ucsc.edu
From Hyades, SSH to one of the GPU nodes:
$ ssh gpu-2
Start a VNC desktop on the GPU node:
$ /opt/TurboVNC/bin/vncserver

Desktop 'TurboVNC: gpu-2.local:1 (dong)' started on display gpu-2.local:1
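The first invocation prompts for a VNC password and creates ~/.vnc. Useful variations, sketched here with standard vncserver options (the display number :1 matches the session started above):
$ /opt/TurboVNC/bin/vncserver -geometry 1920x1080 -depth 24
$ /opt/TurboVNC/bin/vncserver -kill :1
The -kill option shuts the desktop down when you are done with it.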
Forward the TCP port the TurboVNC server is listening on (display :1 corresponds to port 5901, i.e. 5900 + display number). We can use SSH escape sequences to enable port forwarding in the existing SSH session: press the Enter key (to make sure the escape character is at the beginning of a line), then type ~C (tilde followed by capital C) to open a command line (with the prompt ssh>), and specify the port forwarding:
ssh> -L localhost:5901:gpu-2:5901
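Equivalently, the tunnel can be established when opening the connection from the client in the first place, using Hyades as the hop (assuming VNC display :1):
$ ssh -L 5901:gpu-2:5901 hyades.ucsc.edu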
Start TurboVNC viewer on the client machine, and connect to:
vnc://localhost:5901
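In the TurboVNC Viewer's connection dialog this is simply localhost:5901 (or localhost:1). From a Linux client with the viewer installed in the default location, the command-line equivalent would be:
$ /opt/TurboVNC/bin/vncviewer localhost:1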
The VNC desktop itself is not hardware-accelerated:
$ glxinfo
Xlib: extension "GLX" missing on display ":1.0".
But VirtualGL employs a technique called split rendering to force the 3D commands and data from the OpenGL application to go to the GPU (using vglrun):
$ vglrun glxinfo
name of display: :1.0
display: :1  screen: 0
direct rendering: Yes
server glx vendor string: VirtualGL
server glx version string: 1.4
...
client glx vendor string: VirtualGL
client glx version string: 1.4
...
GLX version: 1.4
...
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: Tesla K20m/PCIe/SSE2
OpenGL version string: 4.4.0 NVIDIA 340.58
OpenGL shading language version string: 4.40 NVIDIA via Cg compiler
...

It is illuminating to compare the above output with that in OpenGL on Nvidia K20.
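The same wrapper works for any OpenGL application started inside the VNC desktop, for example (glxgears comes with the standard GLX utilities, and glxspheres64 ships with VirtualGL, so either may or may not be present on a given node):
$ vglrun glxgears
$ vglrun /opt/VirtualGL/bin/glxspheres64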
By comparison, the traditional method of indirect rendering performs the OpenGL rendering on the client machine. In this case, the client machine is my mid-2010 27-inch iMac:
$ ssh -Y hyades.ucsc.edu
$ ssh -Y gpu-2
$ glxinfo
name of display: localhost:10.0
display: localhost:10  screen: 0
direct rendering: No (If you want to find out why, try setting LIBGL_DEBUG=verbose)
server glx vendor string: SGI
server glx version string: 1.4
...
client glx vendor string: NVIDIA Corporation
client glx version string: 1.4
...
GLX version: 1.4
...
OpenGL vendor string: ATI Technologies Inc.
OpenGL renderer string: ATI Radeon HD 5750 OpenGL Engine
OpenGL version string: 1.4 (2.1 ATI-1.24.38)
...