TensorRT Docker Clion - ashBabu/Utilities GitHub Wiki

NVidia Reference

  • docker pull nvcr.io/nvidia/pytorch:20.07-py3
  • Download TensorRT 8 (both the .deb and the .tar.gz packages)
  • export TRT_DEB_DIR_PATH=$HOME/trt_release # Change this path to where you’re keeping your .deb file
  • docker run --cap-add sys_ptrace -p2222:22 --rm --gpus all -ti --volume $TRT_DEB_DIR_PATH:/workspace/trt_release --net host nvcr.io/nvidia/pytorch:20.07-py3. The --cap-add sys_ptrace option is needed so that CLion's debugger (gdb) can attach to processes inside the container. If the container fails to start, play around with the options in this command until it runs.
  • dpkg -i nv-tensorrt-repo-ubuntu1804-cuda11.0-trt8.0.0.3-ea-20210423_1-1_amd64.deb
  • apt-key add /var/nv-tensorrt-repo-ubuntu1804-cuda11.0-trt8.0.0.3-ea-20210423/7fa2af80.pub
  • apt-get update
  • apt-get install -y libnvinfer8 libnvinfer-plugin8 libnvparsers8 libnvonnxparsers8
  • apt-get install -y libnvinfer-bin libnvinfer-dev libnvinfer-plugin-dev libnvparsers-dev
  • apt-get install -y tensorrt (a quick verification sketch follows this list)
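
To check that the packages installed correctly from inside the container, something like the following works (the /usr/src/tensorrt/bin location for trtexec is the usual one for the .deb packages and is an assumption; adjust it if your layout differs):

```bash
# Inside the container: confirm the TensorRT packages are present
dpkg -l | grep -i tensorrt
dpkg -l | grep nvinfer

# Smoke-test the runtime with trtexec, which the tensorrt package usually
# installs under /usr/src/tensorrt/bin (path is an assumption)
/usr/src/tensorrt/bin/trtexec --help
```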

To run examples

In bashrc

  • export TENSORRT_DIR=/path/to/TensorRT-8.4.2.4/ (the extracted .tar.gz from the download step above; there may be some mistakes here, but it works at the moment)
  • export LD_LIBRARY_PATH=$TENSORRT_DIR/lib:$LD_LIBRARY_PATH (a quick check follows this list)
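
To confirm the environment is picked up, a quick check along these lines works, assuming the .tar.gz extracts to the usual TensorRT-8.4.2.4/bin, lib, include, samples layout (that layout is an assumption):

```bash
# Reload the environment and confirm the loader can find the TensorRT libraries
source ~/.bashrc
ls "$TENSORRT_DIR/lib/libnvinfer.so"*            # core TensorRT runtime library
ldd "$TENSORRT_DIR/bin/trtexec" | grep nvinfer   # should resolve via LD_LIBRARY_PATH
```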

Set up CLion-Docker

  • Follow the procedure described here
  • In the Docker environment, run which cmake and which gdb to find the paths to set in CLion if it does not detect them automatically
  • If cmake or gdb is not installed at all, install it with conda or apt-get
  • Commit the changes to the container with docker commit container_id commit_name. Open this committed image in CLion so that it can find cmake and gdb (see the sketch after this list)
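
A minimal sketch of that install-and-commit step, assuming apt-get is used inside the container (the tensorrt-clion:dev tag is just an example name):

```bash
# Inside the container: install the build and debug tools CLion needs
apt-get update && apt-get install -y cmake gdb
which cmake gdb                  # note these paths for the CLion toolchain settings

# On the host: snapshot the container so CLion can use the committed image
docker ps                        # find the running container's ID
docker commit <container_id> tensorrt-clion:dev
```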