NVidia Containers and Tools on Photon OS
Isaac ROS
To run NVIDIA Isaac ROS, first prepare Photon OS on WSL2 as described in https://github.com/dcasota/photonos-scripts/wiki/Photon-OS-on-WSL2. Then use the code snippets below.
# install nvidia container toolkit
cd $HOME
sudo tdnf install -y curl gpg
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /etc/pki/rpm-gpg/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo | sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo
sudo tdnf makecache
sudo tdnf install -y nvidia-container-toolkit nvidia-docker2
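# register the toolkit as a docker runtime (documented nvidia-container-toolkit step; may already be handled by the nvidia-docker2 package)
sudo nvidia-ctk runtime configure --runtime=docker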
sudo systemctl stop docker
sudo systemctl start docker
# https://github.com/docker/buildx#manual-download
mkdir -p $HOME/.docker/cli-plugins
cd $HOME/.docker/cli-plugins
wget https://github.com/docker/buildx/releases/download/v0.12.0/buildx-v0.12.0.linux-amd64
mv buildx-v0.12.0.linux-amd64 docker-buildx
sudo chmod +x docker-buildx
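# optional: verify that docker picks up the buildx plugin
docker buildx version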
cd $HOME
# cuda toolkit
wget https://developer.download.nvidia.com/compute/cuda/12.4.1/local_installers/cuda_12.4.1_550.54.15_linux.run
sudo sh cuda_12.4.1_550.54.15_linux.run
export PATH=/usr/local/cuda-12.4/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-12.4/lib64:$LD_LIBRARY_PATH
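# optional: persist the CUDA environment for new shells (paths assume the default CUDA 12.4 install location)
cat <<'EOF' >> $HOME/.bashrc
export PATH=/usr/local/cuda-12.4/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-12.4/lib64:$LD_LIBRARY_PATH
EOF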
# tao toolkit
sudo tdnf install -y wget unzip python3-pip
sudo pip3 install --upgrade pip
sudo pip3 install virtualenvwrapper
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
source /usr/bin/virtualenvwrapper.sh
mkvirtualenv -p /usr/bin/python3 launcher
sudo pip3 install jupyterlab
sudo pip3 install nvidia-tao
export TERM=linux
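# optional: the nvidia-tao package installs the TAO launcher CLI; a quick check
tao --help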
sudo tdnf install -y git git-lfs
# add local user to the docker group
LOCALUSER="dcaso"
sudo usermod -aG docker $LOCALUSER
newgrp docker
# need to log out and log back in for it to take effect.
exit
Log back in.
wsl -d $distroname -u $ROOTLESS_USER -e /bin/bash
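As a sanity check after logging back in, docker group membership and GPU access from a container can be verified; the CUDA image tag below is only an example.
id -nG | grep docker
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi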
Configure.
# see https://nvidia-isaac-ros.github.io/repositories_and_packages/isaac_ros_object_detection/isaac_ros_detectnet/index.html
cd $HOME
sudo mkdir -p $HOME/workspaces/isaac_ros-dev/src
export ISAAC_ROS_WS=${HOME}/workspaces/isaac_ros-dev
cd $ISAAC_ROS_WS/src
sudo git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common.git
sudo git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_object_detection.git
cd isaac_ros_object_detection/isaac_ros_detectnet/
sudo git lfs pull -X "" -I "resources/rosbags"
cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
sudo sed -i "s/--runtime nvidia/--gpus=\"device=0\"/g" ./scripts/run_dev.sh
./scripts/run_dev.sh
Run inside the container.
sudo apt-get install -y ros-humble-isaac-ros-detectnet ros-humble-isaac-ros-triton ros-humble-isaac-ros-dnn-image-encoder
cd /workspaces/isaac_ros-dev/src/isaac_ros_object_detection/isaac_ros_detectnet && \
./scripts/setup_model.sh --height 632 --width 1200 --config-file resources/quickstart_config.pbtxt
cd /workspaces/isaac_ros-dev && \
ros2 launch isaac_ros_detectnet isaac_ros_detectnet_quickstart.launch.py
Detectnet works.
If you need to check the installation, run ros2 doctor.
ros2 doctor
# to list interfaces, run ros2 interface list; to inspect a message type, run e.g. ros2 interface show sensor_msgs/msg/CameraInfo. So far, this is the last step of my findings.
ros2 interface list
ros2 interface show sensor_msgs/msg/CameraInfo
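To see what the quickstart actually publishes, the topic tools can be used in a second shell inside the container; the detection topic name below is an assumption, check ros2 topic list first.
ros2 topic list
# detections are published as vision_msgs Detection2DArray messages (topic name may differ)
ros2 topic echo /detectnet/detections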
Is it possible to do the same with your own JPEG photos?
DOES NOT WORK YET.
It should, but I haven't found a solution yet. Simply exchanging src/isaac_ros_object_detection/isaac_ros_detectnet/resources/test_image.jpg does not work.
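One untested idea: instead of swapping the file, publish the photo as a ROS image stream with the image_publisher package and remap it to the topic the encoder subscribes to. The package name should be correct for Humble, but the file path and topic remapping below are assumptions, not a verified solution.
sudo apt-get install -y ros-humble-image-publisher
# publish an own photo; check the encoder's input topic with ros2 topic list and adjust the remapping
ros2 run image_publisher image_publisher_node /workspaces/isaac_ros-dev/my_photo.jpg --ros-args -r image_raw:=/image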
How to proceed with a USB camera?
DOES NOT WORK YET. It should, according to https://forums.developer.nvidia.com/t/isaac-ros-object-detection-usb-camera/278457
sudo apt-get install ros-humble-usb-cam
cd /workspaces/isaac_ros-dev/src
# https://github.com/ros-drivers/usb_cam
git clone https://github.com/ros-drivers/usb_cam.git
cd usb_cam
rosdep install --from-paths src --ignore-src -y
colcon build
source install/setup.bash
hash -r
python -m pip install --upgrade pip
pip3 install pydantic
ros2 launch usb_cam camera.launch.py
fails with
[ERROR] [launch]: Caught exception in launch (see debug for traceback): Caught multiple exceptions when trying to load file of format [py]:
- PydanticUserError: If you use `@root_validator` with pre=False (the default) you MUST specify `skip_on_failure=True`. Note that `@root_validator` is deprecated and should be replaced with `@model_validator`.
For further information visit https://errors.pydantic.dev/2.7/u/root-validator-pre-skip
- InvalidFrontendLaunchFileError: The launch file may have a syntax error, or its format is unknown
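The traceback points at pydantic 2.x; a possible, untested workaround is to pin pydantic to a 1.x release inside the container before launching again.
pip3 install "pydantic<2"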
Useful weblinks
- https://nvidia-isaac-ros.github.io/getting_started/index.html
- https://github.com/NVIDIA-ISAAC-ROS
- https://nvidia-isaac-ros.github.io/repositories_and_packages/isaac_ros_common/index.html
Various
Vulkan SDK
See the NVIDIA Vulkan SDK page, https://developer.nvidia.com/vulkan.
cd $HOME
sudo tdnf install -y doxygen wayland-devel libxkbcommon-devel wayland-protocols-devel
sudo tdnf install -y libXcursor-devel libXi-devel libXinerama-devel libXrandr-devel ninja-build
git clone https://github.com/glfw/glfw
cd glfw
sudo cmake -S . -B ./build -D GLFW_USE_WAYLAND=1
cd ..
sudo tdnf install -y rpm-build
sudo rpm -ivh https://kojipkgs.fedoraproject.org//packages/xcb-util-keysyms/0.4.0/16.fc35/x86_64/xcb-util-keysyms-0.4.0-16.fc35.x86_64.rpm
sudo rpm -ivh https://kojipkgs.fedoraproject.org//packages/xcb-util-keysyms/0.4.0/16.fc35/x86_64/xcb-util-keysyms-devel-0.4.0-16.fc35.x86_64.rpm
curl -J -L -O https://sdk.lunarg.com/sdk/download/1.3.268.0/linux/vulkansdk-linux-x86_64-1.3.268.0.tar.xz
sudo tar -xvf vulkansdk-linux-x86_64-1.3.268.0.tar.xz
cd 1.3.268.0/
sudo ./vulkansdk
cd ..
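To use the SDK in the current shell, the LunarG tarball ships a setup-env.sh that can be sourced; vulkaninfo then serves as a quick check. The path assumes the extraction location above.
source $HOME/1.3.268.0/setup-env.sh
vulkaninfo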
Triton Inference Server
See https://github.com/triton-inference-server/server.
With docker container
This sample uses a docker container.
# Step 1: Create the example model repository
git clone -b r23.12 https://github.com/triton-inference-server/server.git
cd server/docs/examples
sudo ./fetch_models.sh
# Step 2: Launch triton from the NGC Triton container
docker run --gpus="device=0" --rm --net=host -v ${PWD}/model_repository:/models nvcr.io/nvidia/tritonserver:23.12-py3 tritonserver --model-repository=/models &
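# optional: wait until the server reports ready on Triton's HTTP health endpoint (default port 8000)
curl -v localhost:8000/v2/health/ready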
# Step 3: Sending an Inference Request
# In a separate console, launch the image_client example from the NGC Triton SDK container
docker run -it --rm --net=host nvcr.io/nvidia/tritonserver:23.12-py3-sdk
/workspace/install/bin/image_client -m densenet_onnx -c 3 -s INCEPTION /workspace/images/mug.jpg
# Inference should return the following
Image '/workspace/images/mug.jpg':
15.346230 (504) = COFFEE MUG
13.224326 (968) = CUP
10.422965 (505) = COFFEEPOT
Triton Inference Server without docker container
According to https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/customization_guide/build.html#building-without-docker, you can build Triton without a docker container.
sudo tdnf install -y python3-pip
sudo pip3 install requests
sudo pip3 install psutil
# todo /usr/local/lib/docker/cli-plugins/docker-buildx
git clone https://github.com/triton-inference-server/server
cd server
sudo ./build.py -v --enable-all
# TODO / UNFINISHED
# ./build.py stops with nvcc error : 'cicc' died due to signal 9 (Kill signal)
# https://github.com/microsoft/onnxruntime/issues/18579
deepstream-yolo3-gige application
See https://github.com/NVIDIA-AI-IOT/deepstream-yolo3-gige-apps. The application idea can be extended to many use cases, such as highway traffic flow monitoring, industrial production line quality control, supermarket safety control, etc. The application takes advantage of the NVIDIA DeepStream 5.1 SDK and the Yolo3 object detection libraries - no training is needed, but it can be added as well.
sudo tdnf install -y meson glib-devel libxml2-devel libusb-devel gobject-introspection-devel gtk3-devel gstreamer-devel gstreamer-plugins-base-devel
git clone https://github.com/AravisProject/aravis
cd aravis
sudo meson build # sudo meson build --reconfigure
cd build
sudo ninja
sudo ninja install
# TODO / UNFINISHED
X Virtual Frame Buffer
A virtual display must be passed into the container, e.g. docker run -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=:1.
To run GPU-accelerated 3D rendering in a docker container, the NVIDIA container runtime https://developer.nvidia.com/blog/gpu-containers-runtime is a prerequisite. The X virtual framebuffer lets applications run against an X server outside of the container while utilizing the GPU device on the host machine.
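As an illustration of the idea, assuming an Xvfb binary were available (which it is not on Photon OS by default, see below); the image name is a placeholder.
# start a virtual display on :1 and point clients at it
Xvfb :1 -screen 0 1920x1080x24 &
export DISPLAY=:1
# the container shares the host X socket; <image> is a placeholder
docker run --gpus all -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=:1 <image>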
Useful weblinks:
- https://medium.com/@benjamin.botto/opengl-and-cuda-applications-in-docker-af0eece000f1
- https://medium.com/@renaldaszioma/how-to-run-unity-rendering-on-amazon-cloud-or-without-monitor-699eed0ce963
- https://stackoverflow.com/questions/69671382/how-to-run-chrome-in-xvfb-with-gpu-nvidia-supported
- https://boygiandi.medium.com/ch%E1%BA%A1y-chrome-v%E1%BB%9Bi-gpu-tr%C3%AAn-instance-ec2-aws-69369d73235a
As far as I know, Photon OS does not include an xvfb package. The following is an attempt to run Xvfb without a full X server, installing packages from the Fedora repository. NO VALUE. DO NOT USE.
# https://koji.fedoraproject.org/koji/buildinfo?buildID=
sudo tdnf install -y pixman libxkbcommon libxkbcommon-x11 libxt audit libunwind libxfont2 libglvnd-glx rpm-build
sudo rpm -ivh https://kojipkgs.fedoraproject.org//packages/libxkbfile/1.1.2/1.eln133/x86_64/libxkbfile-1.1.2-1.eln133.x86_64.rpm
sudo rpm -ivh https://kojipkgs.fedoraproject.org//packages/xkbcomp/1.4.6/7.eln133/x86_64/xkbcomp-1.4.6-7.eln133.x86_64.rpm
sudo rpm -ivh https://kojipkgs.fedoraproject.org//packages/xkeyboard-config/2.40/2.eln133/noarch/xkeyboard-config-2.40-2.eln133.noarch.rpm
sudo rpm -ivh https://kojipkgs.fedoraproject.org//packages/libXmu/1.1.4/1.fc38/x86_64/libXmu-1.1.4-1.fc38.x86_64.rpm
sudo rpm -ivh https://kojipkgs.fedoraproject.org//packages/xorg-x11-xauth/1.1.1/2.fc36/x86_64/xorg-x11-xauth-1.1.1-2.fc36.x86_64.rpm
sudo rpm -ivh https://kojipkgs.fedoraproject.org//packages/mesa/22.1.7/1.fc36/x86_64/mesa-libglapi-22.1.7-1.fc36.x86_64.rpm
sudo rpm -ivh https://kojipkgs.fedoraproject.org//packages/xorg-x11-server/1.20.14/28.fc38/x86_64/xorg-x11-server-common-1.20.14-28.fc38.x86_64.rpm
sudo rpm -ivh https://kojipkgs.fedoraproject.org//packages/xorg-x11-server/1.20.14/26.fc38/x86_64/xorg-x11-server-Xvfb-1.20.14-26.fc38.x86_64.rpm
/bin/Xvfb --help