NVIDIA-based Video Analytics on Photon OS on WSL2

NVIDIA's vision AI solutions have become very popular. The DeepStream SDK in particular, a complete streaming analytics toolkit based on GStreamer for AI-based multi-sensor processing and for video, audio, and image understanding, is ideal for vision AI.

Most examples run only on NVIDIA EGX servers and NVIDIA Jetson appliances.

Wouldn't it be practical to test a few scenarios on a laptop with a built-in NVIDIA GPU? This recipe shows how to configure a laptop for NVIDIA-based video analytics.

The recipe is at a very early stage and makes use of a laptop's built-in camera. Additional features, e.g. video content detection, inferencing with data, audio, etc., are not included yet.

Prerequisites

  • Laptop Lenovo Yoga Pro i9 with the software drivers for the NVIDIA RTX 4070 GPU installed on Microsoft Windows 11, see https://docs.nvidia.com/cuda/wsl-user-guide/index.html#nvidia-compute-software-support-on-wsl-2

  • Install Photon OS on WSL2 on the laptop. After the installation, check nvidia-smi, /dev/video0 and v4l-utils, for example:
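
    # A minimal sanity-check sketch. The v4l2-ctl path assumes v4l-utils was
    # built from source in $HOME, as referenced later in this recipe.
    nvidia-smi
    ls -l /dev/video0
    $HOME/v4l-utils/build/utils/v4l2-ctl/v4l2-ctl --list-devices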

  • On Photon OS on WSL2, install the NVIDIA Container Toolkit, for example:
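
    # A minimal sketch, assuming the nvidia-container-toolkit package is
    # reachable from a configured repository (see NVIDIA's installation guide
    # for the repository setup).
    sudo tdnf install -y nvidia-container-toolkit
    # Register the NVIDIA runtime with docker and restart the daemon
    sudo nvidia-ctk runtime configure --runtime=docker
    sudo systemctl restart docker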

  • On Photon OS on WSL2, check localhost connectivity, e.g. through an nginx container.

    sudo docker pull nginx
    sudo docker container run -p 80:80 --name nginx-container nginx
    # Open (in host browser) http://localhost
    
  • On Photon OS on WSL2, install initool, used later to modify the DeepStream pipeline configuration file.

    sudo tdnf install -y curl tar git build-essential gmp-devel
    curl -J -L -O https://sourceforge.net/projects/mlton/files/mlton/20210117/mlton-20210117-1.amd64-linux-glibc2.31.tgz
    sudo tar -xzvf mlton-20210117-1.amd64-linux-glibc2.31.tgz
    cd mlton-20210117-1.amd64-linux-glibc2.31
    sudo make
    cd ..
    git clone https://github.com/dbohdan/initool
    cd initool
    sudo make
    sudo cp ./initool /sbin/initool
    cd ..
    sudo rm -r -f mlton-20210117-1.amd64-linux-glibc2.31
    sudo rm mlton-20210117-1.amd64-linux-glibc2.31.tgz
    sudo rm -r -f initool
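    # Quick sanity check of the installed binary (a sketch): round-trip a value
    # through a scratch file; initool's get subcommand should print the stored
    # key assignment.
    printf '[main]\nkey=1\n' > /tmp/initool-test.ini
    initool get /tmp/initool-test.ini main key
    rm /tmp/initool-test.ini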
    

Install and Configure DeepStream Docker Container

On Photon OS on WSL2, pull the DeepStream docker container. See https://catalog.ngc.nvidia.com/orgs/nvidia/containers/deepstream. For a custom dGPU setup, triton-devel is sufficient.

# user=$oauthtoken
# password=<APIKey>
sudo docker login nvcr.io 
export DEEPSTREAMCONTAINERIMAGE="nvcr.io/nvidia/deepstream:6.4-gc-triton-devel"
# export DEEPSTREAMCONTAINERIMAGE="nvcr.io/nvidia/deepstream:6.4-triton-multiarch"
# export DEEPSTREAMCONTAINERIMAGE="nvcr.io/nvidia/deepstream:6.4-samples-multiarch"
sudo docker pull $DEEPSTREAMCONTAINERIMAGE

To make use of the video camera, we have to pass the USB camera on /dev/video0 through to the docker container. The following docker run command maps /dev/video0, v4l2-ctl, and initool into the container.

# In WSL2 on Windows 11
export DISPLAY=:0

sudo docker run \
     --gpus "device=0" -e CUDA_CACHE_DISABLE=0 --device /dev/video0 --privileged \
     -v $HOME/v4l-utils/build/utils/v4l2-ctl/v4l2-ctl:/sbin/v4l2-ctl -v /sbin/initool:/sbin/initool \
     -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix \
     -p 8554:8554 -p 5400:5400/udp $DEEPSTREAMCONTAINERIMAGE \
     /bin/bash

Configure DeepStream inside the docker container.

# https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Quickstart.html#dgpu-setup-for-ubuntu
# This recipe allows carrying out various scenarios.

apt-get update -y

apt install -y \
libssl3 libssl-dev libgstreamer1.0-0 gstreamer1.0-tools \
gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly \
gstreamer1.0-libav libgstreamer-plugins-base1.0-dev libgstrtspserver-1.0-0 \
libjansson4 libyaml-cpp-dev libjsoncpp-dev protobuf-compiler \
gcc make git python3

apt-get install -y ffmpeg

apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/3bf863cc.pub
add-apt-repository -y "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/ /"
apt-get install -y cuda-toolkit-12-2

apt-get install -y libnvinfer8=8.6.1.6-1+cuda12.0 libnvinfer-plugin8=8.6.1.6-1+cuda12.0 libnvparsers8=8.6.1.6-1+cuda12.0 \
libnvonnxparsers8=8.6.1.6-1+cuda12.0 libnvinfer-bin=8.6.1.6-1+cuda12.0 libnvinfer-dev=8.6.1.6-1+cuda12.0 \
libnvinfer-plugin-dev=8.6.1.6-1+cuda12.0 libnvparsers-dev=8.6.1.6-1+cuda12.0 libnvonnxparsers-dev=8.6.1.6-1+cuda12.0 \
libnvinfer-samples=8.6.1.6-1+cuda12.0 libcudnn8=8.9.4.25-1+cuda12.2 libcudnn8-dev=8.9.4.25-1+cuda12.2
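
The pinned TensorRT and cuDNN package versions can be verified afterwards, a minimal check sketch:

# List the installed TensorRT and cuDNN packages with their versions
dpkg -l | grep -E 'libnvinfer|libcudnn'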

cd /opt/nvidia/deepstream/deepstream
./update_rtpmanager.sh
./user_additional_install.sh

# https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_sample_configs_streams.html#scripts-included-along-with-package
cd /opt/nvidia/deepstream/deepstream/samples
./prepare_ds_triton_model_repo.sh
./prepare_ds_triton_tao_model_repo.sh
./prepare_classification_test_video.sh

Exit the container, commit the changes, and start the docker container again.

ID=`sudo docker ps -a | grep $DEEPSTREAMCONTAINERIMAGE -m 1 | awk '{ print $1 }'`
Name="deepstreamconfigured" # lowercase
sudo docker commit $ID $Name

# In WSL2 on Windows 11
export DISPLAY=:0
sudo docker run \
     --gpus "device=0" -e CUDA_CACHE_DISABLE=0 --device /dev/video0 --privileged \
     -v $HOME/v4l-utils/build/utils/v4l2-ctl/v4l2-ctl:/sbin/v4l2-ctl -v /sbin/initool:/sbin/initool \
     -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix \
     -p 8554:8554 -p 5400:5400/udp $Name \
     /bin/bash

Inside the container, check the USB camera functionality. Here is a sample output of v4l2-ctl --list-devices.

Integrated Camera: Integrated C (usb-vhci_hcd.0-1):
        /dev/video0
        /dev/video1
        /dev/video2
        /dev/video3
        /dev/media0
        /dev/media1

/dev/video0 is our USB camera source. List some information about the camera.

# List all info about a given device
v4l2-ctl --all -d /dev/video0
# List the camera's pixel formats, image sizes and frame rates
v4l2-ctl --list-formats-ext -d /dev/video0
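
A specific format can also be requested up front (a sketch; MJPG at 640x480 is one of the formats reported by the listing above):

# Request MJPG at 640x480, then read back the negotiated format
v4l2-ctl -d /dev/video0 --set-fmt-video=width=640,height=480,pixelformat=MJPG
v4l2-ctl -d /dev/video0 --get-fmt-video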

The USB camera light should turn on when using gst-launch, e.g. with a fakesink that simply discards the captured buffers:

gst-launch-1.0 v4l2src device=/dev/video0 ! fakesink

Check the deepstream-app functionality. Here is a sample output of deepstream-app --version-all.

deepstream-app version 6.4.0
DeepStreamSDK 6.4.0
CUDA Driver Version: 12.4
CUDA Runtime Version: 12.2
TensorRT Version: 8.6
cuDNN Version: 8.9
libNVWarp360 Version: 2.0.1d3

Remote Display

Now, as there is no directly attached display, another method is required. In this weblink, two possibilities are suggested: RTSP or UDP. There is a third option using VNC with a dummy display adapter, but it may not work as described there because Photon OS does not include any xserver-xorg-video packages, and the path 'WSL2 > Photon OS > DockerContainer' would still have to be engineered.

For this reason, we are using the built-in RTSP functionality of the DeepStream SDK.

Examples

Pipelines without/with inferencing

This part is still in progress.

In GStreamer, the v4l2src element is used to capture video from v4l2 devices, such as webcams and TV cards. It is a crucial part of creating video pipelines. Let's break down what "caps" means in this context:

Capabilities (caps): In GStreamer, caps (short for capabilities) define the format and properties of the data that flows through a pipeline. They specify details like image format, resolution, frame rate, and more. When configuring a pipeline with v4l2src, you can set caps to ensure that the captured video matches your desired format. For example, you might specify caps like "video/x-raw, format=NV12, width=640, height=480, framerate=30/1" to enforce a specific resolution and format.

Usage: When constructing a pipeline, you can use caps to filter or modify the data produced by v4l2src. For instance, you might use a capsfilter element after v4l2src to set specific caps, ensuring that downstream elements (such as video sinks) receive the expected data format. Here's an example pipeline using v4l2src with caps:

gst-launch-1.0 v4l2src device=/dev/video0 ! capsfilter caps="video/x-raw, format=NV12, width=640, height=480, framerate=30/1" ! waylandsink

This pipeline captures video from /dev/video0 (a webcam) and enforces the specified format before displaying it using waylandsink.

Remember that caps play a crucial role in ensuring data consistency and compatibility within GStreamer pipelines. They allow you to tailor the video stream to your requirements.
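
As a usage note, caps can also be written directly between two elements, which inserts a capsfilter implicitly; the following sketch is equivalent to the example above (the caps string is quoted so the shell passes it through unchanged):

gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw, format=NV12, width=640, height=480, framerate=30/1' ! waylandsink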

DeepStream pipelines can be built by

  • gst-launch: elements piped one after another
  • deepstream-app and deepstream-nvof-app: C source, Python source, etc.

NVIDIA devices support (NVIDIA) Optical Flow functionality, but currently not for USB cameras; see Deepstream-nvof-app with usb camera.

Using gst-launch, a pipeline has the structure <command> <first element> ! <element> ! ...

Here is a very short description of a few basics.

command:

  • gst-launch-1.0 : GStreamer pipeline launcher. Use --gst-debug=3 or --gst-debug=6 to show debug information.
  • gst-inspect-1.0 <element [|element]> : show insight information of element(s).

first element:

  • filesrc : specify an option, e.g. location=mymjpeg.mkv.
  • v4l2src : used to capture video from v4l2 devices.

Caps after the v4l2src element: if the USB camera used is able to provide an MJPEG stream, e.g. in 640x480@30fps, you might use one of the following caps to tailor the stream:

  • image/jpeg
  • video/x-raw
  • video/x-h264

elements: see https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_Intro.html (and https://gstreamer.freedesktop.org/documentation/plugins_doc.html).

For non-nvof use, a pipeline starting from a USB camera stream typically begins with gst-launch-1.0 v4l2src ! videoconvert ! nvvideoconvert.
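
For illustration, a minimal sketch of such a pipeline that converts the camera output into NVMM GPU memory and simply discards the buffers (no display needed):

gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! nvvideoconvert ! 'video/x-raw(memory:NVMM), format=NV12' ! fakesink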

For the custom dGPU environment, when using compilable C sources, what works so far is test-launch.c from gst-rtsp-server/examples at master · GStreamer/gst-rtsp-server · GitHub in the DeepStream 6.4 docker container:

./test-launch '( v4l2src device=/dev/video0 ! image/jpeg,format=MJPG,width=640,height=480,framerate=30/1 ! jpegparse ! rtpjpegpay name=pay0 pt=96 )'
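
For reference, test-launch.c can be compiled with pkg-config inside the container (a sketch, assuming the gst-rtsp-server development files are available, e.g. after the build described in the 'RTSP server inside the DeepStream container' section below):

gcc test-launch.c -o test-launch $(pkg-config --cflags --libs gstreamer-1.0 gstreamer-rtsp-server-1.0)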

Example 1: USB camera inference using source1_usb_dec_infer_resnet_int8.txt. DOES NOT WORK YET.

In DeepStream 6.2, video formats with built-in compression were not supported; see https://forums.developer.nvidia.com/t/incorrect-camera-parameters-on-deepstream-6-0/231814.

DeepStream 6.4 comes with a sample configuration, source1_usb_dec_infer_resnet_int8.txt. First, modify the configuration to make use of the USB camera.

With initool, we can easily modify the settings of the configuration file.

cd /opt/nvidia/deepstream/deepstream/
export inifile="/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/source1_usb_dec_infer_resnet_int8.txt"
export bkupinifile="/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/source1_usb_dec_infer_resnet_int8.txt.bkup"
export newinifile="/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/source1_usb_dec_infer_resnet_int8.txt.new"
cp $inifile $bkupinifile
set -o pipefail
initool set $inifile tiled-display enable 0 \
| initool set - source0 enable 1 \
| initool set - source0 type 1 \
| initool set - source0 camera-width 640 \
| initool set - source0 camera-height 480 \
| initool set - source0 camera-fps-n 30 \
| initool set - source0 camera-fps-d 1 \
| initool set - source0 camera-v4l2-dev-node 0 \
| initool set - sink0 enable 0 \
| initool set - sink1 enable 0 \
| initool set - sink2 enable 1 \
| initool set - sink2 type 4 \
| initool set - sink2 codec 1 \
| initool set - sink2 enc-type 1 \
| initool set - sink2 rtsp-port 8554 \
| initool set - sink2 udp-port 5400 \
| initool set - osd enable 1 \
| initool set - streammux live-source 1 \
| initool set - streammux batch-size 1 \
| initool set - streammux width 640 \
| initool set - streammux height 480 \
| initool set - primary-gie enable 1 \
| initool set - primary-gie batch-size 1 \
| initool set - primary-gie model-engine-file "../../models/Primary_Detector/resnet18_trafficcamnet.etlt_b1_gpu0_int8.engine" \
| initool set - primary-gie config-file "config_infer_primary.txt" > $newinifile
cp $newinifile $inifile
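
The modified values can be spot-checked with initool's get subcommand, a sketch:

initool get $inifile source0 camera-v4l2-dev-node
initool get $inifile sink2 rtsp-port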

Start the deepstream-app.

cd /opt/nvidia/deepstream/deepstream/
./bin/deepstream-app -c $inifile
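
If the pipeline comes up, the RTSP stream can be probed from inside the container with ffprobe from the ffmpeg package installed earlier (a sketch; /ds-test is the mount point printed by deepstream-app at startup, see the log further below):

ffprobe rtsp://127.0.0.1:8554/ds-test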

Alternatively, instead of modifying the shipped ini file, the following snippet creates the config file from scratch.

cat << EOF > $newinifile
################################################################################
# Copyright (c) 2018-2020, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
################################################################################

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=0
rows=1
columns=1
width=1280
height=720

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=1
camera-width=640
camera-height=480
camera-fps-n=30
camera-fps-d=1
camera-v4l2-dev-node=0

[sink0]
enable=0
#Type - 1=FakeSink 2=EglSink/nv3dsink(Jetson only) 3=File 4=RTSPStreaming 5=nvdrmvideosink
type=2
sync=0
plane-id=0
width=0
height=0
conn-id=1
source-id=0

[sink1]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
bitrate=2000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
# set profile only for hw encoder, sw encoder selects profile based on sw-preset
profile=0
output-file=out.mp4
source-id=0

[sink2]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=nvdrmvideosink
type=4
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=1
sync=0
bitrate=4000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
# set profile only for hw encoder, sw encoder selects profile based on sw-preset
profile=0
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400


[osd]
enable=1
border-width=2
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0

[streammux]
##Boolean property to inform muxer that sources are live
live-source=1
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=640
height=480
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
model-engine-file=../../models/Primary_Detector/resnet18_trafficcamnet.etlt_b1_gpu0_int8.engine
#Required to display the PGIE labels, should be added even when using config-file
#property
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
#Required by the app for SGIE, when used along with config-file property
gie-unique-id=1
config-file=config_infer_primary.txt

[tests]
file-loop=0
EOF

cp $newinifile $inifile
./bin/deepstream-app -c $inifile

The pipeline starts and the model is loaded successfully.

But the RTSP stream doesn't show up, e.g. in the VLC player.

According to this NVIDIA forum entry, the source /opt/nvidia/deepstream/deepstream/sources/apps/apps-common/src/deepstream_source_bin.c has the "NV12" format hardcoded, which isn't supported e.g. by the laptop camera used. Hence, the source file has to be modified.

As a first attempt, "NV12" has been modified to "MJPG", as the USB camera supports MJPG.


Replace "video/x-raw" with "image/jpeg" as well.

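The following shell sketch illustrates the two edits (an assumption that the format strings appear as string literals in deepstream_source_bin.c; review each replacement before compiling, since sed changes all occurrences):

cd /opt/nvidia/deepstream/deepstream/sources/apps/apps-common/src
cp deepstream_source_bin.c deepstream_source_bin.c.bkup
# "NV12" -> "MJPG" and "video/x-raw" -> "image/jpeg"
sed -i 's/"NV12"/"MJPG"/g' deepstream_source_bin.c
sed -i 's|"video/x-raw"|"image/jpeg"|g' deepstream_source_bin.c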

Now compile the modified deepstream-app.

cd /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-app
# CUDA_VER must match the existing CUDA directory in /usr/local.
export CUDA_VER="12.2"
# Library search path for the CUDA runtime libraries
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
make

Now run the config file with the newly compiled deepstream-app.

cd /opt/nvidia/deepstream/deepstream
./sources/apps/sample_apps/deepstream-app/deepstream-app -c $inifile

Unfortunately, this does not work yet. It stops with a 'Failed to link' issue.

root@e0532f8bdbbf:/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-app# ./deepstream-app -c $inifile

** ERROR: <create_camera_source_bin:173>: Failed to link 'src_cap_filter1' (image/jpeg, width=(int)640, height=(int)480, framerate=(fraction)30/1) and 'nvvidconv1' (video/x-raw, width=(int)[ 1, 2147483647 ], height=(int)[ 1, 2147483647 ], framerate=(fraction)[ 0/1, 2147483647/1 ], format=(string){ ABGR64_LE, BGRA64_LE, AYUV64, ARGB64_LE, ARGB64, RGBA64_LE, ABGR64_BE, BGRA64_BE, ARGB64_BE, RGBA64_BE, GBRA_12LE, GBRA_12BE, Y412_LE, Y412_BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, RGB10A2_LE, BGR10A2_LE, Y410, GBRA, ABGR, VUYA, BGRA, AYUV, ARGB, RGBA, A420, AV12, Y444_16LE, Y444_16BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, v210, UYVP, I420_10LE, I420_10BE, P010_10LE, NV12_10LE32, NV12_10LE40, P010_10BE, Y444, RGBP, GBR, BGRP, NV24, xBGR, BGRx, xRGB, RGBx, BGR, IYU2, v308, RGB, Y42B, NV61, NV16, VYUY, UYVY, YVYU, YUY2, I420, YV12, NV21, NV12, NV12_64Z32, NV12_4L4, NV12_32L32, Y41B, IYU1, YVU9, YUV9, RGB16, BGR16, RGB15, BGR15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE32, GRAY8 }; video/x-raw(ANY), format=(string){ ABGR64_LE, BGRA64_LE, AYUV64, ARGB64_LE, ARGB64, RGBA64_LE, ABGR64_BE, BGRA64_BE, ARGB64_BE, RGBA64_BE, GBRA_12LE, GBRA_12BE, Y412_LE, Y412_BE, A444_10LE, GBRA_10LE, A444_10BE, GBRA_10BE, A422_10LE, A422_10BE, A420_10LE, A420_10BE, RGB10A2_LE, BGR10A2_LE, Y410, GBRA, ABGR, VUYA, BGRA, AYUV, ARGB, RGBA, A420, AV12, Y444_16LE, Y444_16BE, v216, P016_LE, P016_BE, Y444_12LE, GBR_12LE, Y444_12BE, GBR_12BE, I422_12LE, I422_12BE, Y212_LE, Y212_BE, I420_12LE, I420_12BE, P012_LE, P012_BE, Y444_10LE, GBR_10LE, Y444_10BE, GBR_10BE, r210, I422_10LE, I422_10BE, NV16_10LE32, Y210, v210, UYVP, I420_10LE, I420_10BE, P010_10LE, NV12_10LE32, NV12_10LE40, P010_10BE, Y444, RGBP, GBR, BGRP, NV24, xBGR, BGRx, xRGB, RGBx, BGR, IYU2, v308, RGB, Y42B, NV61, NV16, VYUY, UYVY, YVYU, YUY2, I420, YV12, NV21, NV12, NV12_64Z32, NV12_4L4, NV12_32L32, Y41B, IYU1, YVU9, YUV9, RGB16, BGR16, RGB15, BGR15, RGB8P, GRAY16_LE, GRAY16_BE, GRAY10_LE32, GRAY8 }, width=(int)[ 1, 2147483647 ], height=(int)[ 1, 2147483647 ], framerate=(fraction)[ 0/1, 2147483647/1 ])
** ERROR: <create_camera_source_bin:225>: create_camera_source_bin failed
** ERROR: <create_pipeline:1863>: create_pipeline failed
** ERROR: <main:697>: Failed to create pipeline
Quitting
App run failed

This has been reported back to NVIDIA in https://forums.developer.nvidia.com/t/how-to-use-deepstream-app-with-mjpeg-format-stream-2nd-try/283245.

Without the change, deepstream-app starts, but without a stream to the RTSP client.

 *** DeepStream: Launched RTSP Streaming at rtsp://localhost:8554/ds-test ***

0:00:05.853288771   257 0x55e8e174a440 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b1_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x544x960
1   OUTPUT kFLOAT output_bbox/BiasAdd 16x34x60
2   OUTPUT kFLOAT output_cov/Sigmoid 4x34x60

0:00:05.998037009   257 0x55e8e174a440 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b1_gpu0_int8.engine
0:00:06.003709881   257 0x55e8e174a440 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt sucessfully

Runtime commands:
        h: Print this help
        q: Quit

        p: Pause
        r: Resume

** INFO: <bus_callback:301>: Pipeline ready

** INFO: <bus_callback:287>: Pipeline running


**PERF:  FPS 0 (Avg)
**PERF:  0.00 (0.00)
**PERF:  0.00 (0.00)
**PERF:  0.00 (0.00)
**PERF:  0.00 (0.00)

(deepstream-app:257): GLib-GObject-WARNING **: 09:38:58.959: g_object_get_is_valid_property: object class 'GstUDPSrc' has no property named 'pt'
**PERF:  0.00 (0.00)
**PERF:  0.00 (0.00)
**PERF:  0.00 (0.00)
**PERF:  0.00 (0.00)

(deepstream-app:257): GLib-GObject-WARNING **: 09:39:18.963: g_object_get_is_valid_property: object class 'GstUDPSrc' has no property named 'pt'
**PERF:  0.00 (0.00)
^C** ERROR: <_intr_handler:140>: User Interrupted..

TO BE CONTINUED.

RTSP server inside the DeepStream container

I have found the following method to set up a separate RTSP server inside the DeepStream container and to do a few tests.

curl -J -L -O https://gstreamer.freedesktop.org/src/gst-rtsp-server/gst-rtsp-server-1.20.3.tar.xz
tar -xf gst-rtsp-server-1.20.3.tar.xz
cd gst-rtsp-server-1.20.3
mkdir build
# apt-get install libgstreamer1.0-0 gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav gstreamer1.0-tools gstreamer1.0-x gstreamer1.0-alsa gstreamer1.0-gl gstreamer1.0-gtk3 gstreamer1.0-qt5 gstreamer1.0-pulseaudio
# add-apt-repository universe
# apt-get install libgstreamer1.0-dev libgstreamer-plugins-bad1.0-dev libgstreamer-plugins-base1.0-dev libgstreamer-plugins-good1.0-dev
apt install -y libgirepository1.0-dev libgstreamer-plugins-bad1.0-dev
meson setup build/
ninja -C build/
ninja -C build/ install
cd build/examples/
./test-readme

In a PowerShell prompt, enter usbipd state. Search for the USB camera client IP address.


Open the VLC player and enter rtsp://<usb camera client ip address>:8554/test. After a few seconds, you see the test screen.


Now you can check the test-launch application, which uses the USB camera. According to v4l2-ctl --list-formats-ext -d /dev/video0, the USB camera supports six formats. The first is 'MJPG', a so-called image stream.


Be aware that not every resolution works well for RTSP. For instance, the highest resolution, 1280x720, never worked as an RTSP stream in this specific lab setup, but good results were possible using 640x480. Here's a v4l2src pipeline for 'MJPG':

./test-launch --gst-debug=3 '( v4l2src device=/dev/video0 ! image/jpeg,format=MJPG,width=640,height=480,framerate=30/1 ! jpegparse ! rtpjpegpay name=pay0 pt=96 )'

Warnings

When starting the RTSP example ./test-launch --gst-debug=3 '( v4l2src device=/dev/video0 ! image/jpeg,format=MJPG,width=640,height=480,framerate=30/1 ! jpegparse ! rtpjpegpay name=pay0 pt=96 )', there is a flood of 'newly allocated buffer is not free' warnings.


According to https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3139, this is a known issue in GStreamer 1.20 that has been fixed in GStreamer 1.22. DeepStream 6.4 comes with GStreamer 1.20.3 from Ubuntu. There is no supported way to update this single component; see https://forums.developer.nvidia.com/t/upgrading-the-gstreamer-version/186843/5.

Example 2: USB camera people count. NOT ENGINEERED YET TO RUN ON LAPTOP.

As example 1 does not yet work with inferencing, the following example found on GitHub, https://github.com/katjasrz/deepstream-test1-usb-people-count, hasn't been engineered yet for use on a laptop.

git clone https://github.com/katjasrz/deepstream-test1-usb-people-count
cd deepstream-test1-usb-people-count
wget https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplenet/versions/deployable_quantized_v2.5/files/resnet34_peoplenet_int8.etlt
wget https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplenet/versions/deployable_quantized_v2.5/files/labels.txt
wget https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplenet/versions/deployable_quantized_v2.5/files/resnet34_peoplenet_int8.txt
mv ./resnet34_peoplenet_int8.etlt ./model/resnet34_peoplenet_int8.etlt
mv ./resnet34_peoplenet_int8.txt ./model/resnet34_peoplenet_int8.txt
mv ./labels.txt ./model/labels.txt
make
./deepstream-test1-usb-people-count camera

Example 3: USB camera picture capture. NOT ENGINEERED YET TO RUN ON LAPTOP.

According to this weblink, the open source utility nvgstcapture-1.0 is used to capture a camera picture. There are several sources on the web, here and here. The second weblink from NVIDIA is for their Jetson Nano device.

Example 4: Tinkering with Graph Composer pipeline examples. NOT ENGINEERED YET TO RUN ON LAPTOP.

The following command registers the extensions for the samples in https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Quickstart.html and starts execute_graph. However, it stops with: ERROR extensions/nvdsbase/nvds_scheduler.cpp@184: Failed to set GStreamer pipeline to PLAYING.

registry repo sync -n ngc-public
/opt/nvidia/graph-composer/execute_graph.sh deepstream-camera.yaml v4l2-usb-camera.parameters.yaml -d ../common/target_x86_64.yaml

A similar issue is mentioned in the research weblinks.
