Torch-TensorRT Installation

Installing Torch-TensorRT on the Jetsons is not straightforward: you need to compile the library from source.

Building from Source

To install Torch-TensorRT, you can build the library from source inside your docker image, where all the deep learning libraries are installed. If you have PyTorch installed natively on your Jetson, you can follow the same compilation steps. Before compiling, however, you need to install Bazel on your system, whether natively or inside your container. It is recommended that you use the latest version, which can be found in the Bazel GitHub repository.
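
For reference, below is a minimal sketch of installing a Bazel release binary on the Jetson's arm64 architecture. The 5.2.0 version number is an assumption; check the .bazelversion file in the Torch-TensorRT release you check out, and download the matching linux-arm64 binary from the Bazel releases page:

# A minimal sketch, not an official procedure; the Bazel version is an assumption.
# (Prefix the commands with sudo when installing natively outside a container.)
BAZEL_VERSION=5.2.0
wget https://github.com/bazelbuild/bazel/releases/download/${BAZEL_VERSION}/bazel-${BAZEL_VERSION}-linux-arm64 -O /usr/local/bin/bazel
chmod +x /usr/local/bin/bazel
bazel --version  # prints the installed version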

  1. Go to the GitHub repository of Torch-TensorRT. You need to use version 1.3.0, as the newer versions currently do not support the NVIDIA L4T docker images (version 1.4.0 requires PyTorch 2.0.1, while the L4T-ML images pack PyTorch 2.0.0 and below). Inside the container, run:
git clone --branch v1.3.0 https://github.com/pytorch/TensorRT.git
  2. cd into the cloned directory, and run:
cp toolchains/jp_workspaces/WORKSPACE.jp50 WORKSPACE

This replaces the default Bazel WORKSPACE file with the JetPack 5.x variant, which ensures that the compilation uses the PyTorch libraries compiled and installed by NVIDIA on the Jetson.

  3. Finally, compile and install the library for Python with:
python3 setup.py install --jetpack-version 4.6 --use-cxx11-abi

Even though we choose JetPack 4.6 here, the compilation is backwards compatible: if you have a newer version of JetPack, the build should still run successfully.
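
Once the build finishes, a quick sanity check confirms that the package is importable (torch_tensorrt is the package's import name; the printed version should match the branch you built):

python3 -c "import torch_tensorrt; print(torch_tensorrt.__version__)"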

Precompiled Wheel

To avoid building from source, we provide a precompiled wheel file that can be installed with pip on Jetson systems running JetPack 5.1.1. The wheel can be installed natively on the Jetson or, preferably, inside a docker container. The wheel file can be found inside the repository here.

  1. Run your desired docker image; for this purpose, we choose NVIDIA's L4T docker images, specifically the L4T-ML variant (a combined sketch of these steps follows the list).

  2. Create a shared docker volume (a bind mount) linking the location of your wheel file on the host to your desired location inside the container.

  3. Inside the container, go to the directory where the wheel is present, and run:

pip3 install ./insert_wheel_filename_here
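
Putting the three steps together, a minimal sketch is shown below. The image tag and host path are placeholders; match the l4t-ml tag to your L4T release on NGC, and substitute the actual wheel filename:

# On the Jetson host (image tag and host path are placeholders):
sudo docker run -it --runtime nvidia --network host \
    -v /path/to/wheel_dir:/wheels \
    nvcr.io/nvidia/l4t-ml:r35.2.1-py3

# Inside the container:
cd /wheels
pip3 install ./insert_wheel_filename_here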

Now you can import Torch-TensorRT in your Python script on the Jetson!
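
As a quick smoke test, the sketch below compiles a trivial model with the torch_tensorrt 1.x Python API; the model and input shape are illustrative only:

python3 - <<'EOF'
import torch
import torch_tensorrt

# A toy model standing in for your own network (illustrative only).
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3),
    torch.nn.ReLU(),
).eval().cuda()

# Compile to a TensorRT-backed module via the torch_tensorrt 1.x API.
trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
    enabled_precisions={torch.float},
)

# Run inference with the optimized module.
print(trt_model(torch.randn(1, 3, 224, 224).cuda()).shape)
EOF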