Processing Backends

Backends

While a generic CPU backend is supported for every program, select programs, such as those performing 2D/3D registration, may also leverage an OpenCL backend. Since the OpenCL backend enables computation to be performed on a GPU, tasks that benefit from performing many calculations in parallel, such as ray casting and pixel-wise image operations, are significantly faster using the OpenCL backend than the standard CPU backend.

When programs supporting the OpenCL backend are started, they first look for any compatible devices. If no compatible devices are found, the OpenCL backend is not made available and the user is forced to use the CPU backend. Otherwise, a default device is selected and the OpenCL backend is used unless the user manually selects the CPU backend. The --backend flag may be used to manually specify the backend:

  • Select the OpenCL backend: --backend ocl
  • Select the generic CPU backend: --backend cpu
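
For example, the following forces the generic CPU backend for a program that also supports OpenCL (using the registration program that appears in the examples later on this page):

xreg-hip-surg-pelvis-single-view-regi-2d-3d --backend cpu <other program arguments and flags>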

The backend flag is not provided by programs that only support the CPU backend.

Selecting an OpenCL Backend Device

Unless a specific device is provided by the user, a default device is chosen when using the OpenCL backend. This may be undesirable in a number of scenarios, such as when the default device does not have sufficient resources to complete processing or when the default device is in use by another user on the system.

NVIDIA Devices

Selecting a specific NVIDIA device is most easily accomplished by using the nvidia-smi command to identify the device and then setting the CUDA_VISIBLE_DEVICES environment variable. For example, running nvidia-smi on a system with 3 GPUs may output something similar to the following:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.116                Driver Version: 390.116                   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX TIT...  Off  | 00000000:02:00.0 Off |                  N/A |
| 26%   41C    P0    83W / 250W |      0MiB /  6082MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX TIT...  Off  | 00000000:03:00.0 Off |                  N/A |
| 26%   25C    P8    12W / 250W |     11MiB /  6083MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  GeForce GTX TIT...  Off  | 00000000:83:00.0 Off |                  N/A |
| 26%   40C    P0    66W / 250W |      0MiB /  6083MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

The above output indicates that there are three GPUs on the system, each with 6 GB of total memory, and that no GPU is currently in use. However, suppose that GPUs 0 and 1 were actually in use and GPU 2 was free. We could then set the CUDA_VISIBLE_DEVICES environment variable to 2 before running our command. For example, in a bash shell:

export CUDA_VISIBLE_DEVICES=2
xreg-hip-surg-pelvis-single-view-regi-2d-3d <other program arguments and flags>
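
Equivalently, bash allows the environment variable to be scoped to a single command, leaving the rest of the shell session unaffected:

CUDA_VISIBLE_DEVICES=2 xreg-hip-surg-pelvis-single-view-regi-2d-3d <other program arguments and flags>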

Other Devices (Including NVIDIA)

A specific device may also be set by passing an identifier string with the --ocl-id flag. A list of valid identifier strings is printed towards the end of each program's help message. For example, a portion of the help message for the xreg-hip-surg-pelvis-single-view-regi-2d-3d program running on a Late 2013 MacBook Pro is listed below:

...

--ocl-id         Specify the OpenCL device to use with a unique identifier string - the available device ID strings
                 may be obtained with the help print-out. The default behavior is to use the default device specified
                 by the boost::compute library, which may not be constant (e.g. it may vary depending on system
                 resources, etc.). (default: "" )
--backend        Specify the compute backend to use. Valid backends are: "ocl", "cpu". See the epilogue of this help
                 message for descriptions of each backend. (default: "ocl" )

...

3 Available OpenCL Devices (#: ID, Vender, Name):
  1. GeForceGT750M, NVIDIA, GeForce GT 750M
  2. Intel(R)Core(TM)i7-4850HQCPU@2.30GHz, Intel, Intel(R) Core(TM) i7-4850HQ CPU @ 2.30GHz
  3. IrisPro, Intel, Iris Pro

2 compute backends are available:
  1. ocl: OpenCL processing, usually GPU, but could be CPU.
  2. cpu: Standard CPU processing, potentially using TBB.

...

The following command will use the integrated Intel GPU:

xreg-hip-surg-pelvis-single-view-regi-2d-3d --ocl-id IrisPro <other program arguments and flags>

The following command will use the discrete NVIDIA GPU:

xreg-hip-surg-pelvis-single-view-regi-2d-3d --ocl-id GeForceGT750M <other program arguments and flags>

The following command will run on the CPU, but via the OpenCL backend rather than the generic CPU backend:

xreg-hip-surg-pelvis-single-view-regi-2d-3d --ocl-id "Intel(R)Core(TM)i7-4850HQCPU@2.30GHz" <other program arguments and flags>
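
To see which devices and backends are available on a particular system, print the program's help message and scroll to the end; a standard -h/--help flag is assumed here, since the exact flag is not shown in the excerpt above:

xreg-hip-surg-pelvis-single-view-regi-2d-3d --help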