Install AD-EYE on DPX2

The installation of AD-EYE on the Drive PX2 follows a procedure similar to the one used on our computers with the dual setup. So far we have tested AD-EYE with the following configuration:

  • Ubuntu version: 16.04
  • ROS: Kinetic
  • OpenCV: 2.4.9.1
  • CUDA: 9.2

Below, steps are provided to install some of the dependencies (where they differ from the original links in the install guides [ADD REFERENCE HERE]) and to tackle any errors or bugs you may encounter. After flashing the PX2 with NVIDIA DRIVE OS, the system already comes with Ubuntu 16.04 and CUDA installed.

Update your system

For the DPX2, the first step is to make sure your system is up to date.

  • To download the latest updates:
sudo apt update
  • To install them:
sudo apt upgrade

CUDA

Note that CUDA is already installed right after flashing the board with the NVIDIA DRIVE OS. To make it visible to your shell, add it to your environment:

echo "export PATH=/usr/local/cuda/bin/:\$PATH" >> ~/.bashrc
echo "export LD_LIBRARY_PATH=/usr/local/cuda/targets/aarch64-linux/lib:\$LD_LIBRARY_PATH" >> ~/.bashrc

source ~/.bashrc 
nvcc -V # check the version of the CUDA compiler

ROS Kinetic

The main instructions can be followed in Install-ROS-Kinetic.

Full list of steps (make sure you follow them one by one):

  • Set up your sources.list & keys:
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'

sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654
  • Fetch updates & install libraries:
sudo apt-get update

sudo apt-get install -y build-essential cmake python-pip

sudo apt-get install -y checkinstall

sudo apt-get install -y libavutil-ffmpeg54

sudo apt-get install -y libswresample-ffmpeg1

sudo apt-get install -y libavformat-ffmpeg56

sudo apt-get install -y libswscale-ffmpeg3

sudo apt-get install aptitude

sudo aptitude install libssl-dev # follow instructions in the wiki for installing this library (and downgrade libssl-dev)

sudo apt-get install -y libnlopt-dev freeglut3-dev qtbase5-dev libqt5opengl5-dev libssh2-1-dev libarmadillo-dev libpcap-dev gksu libgl1-mesa-dev libglew-dev
  • Install ROS Kinetic and dependencies for building packages:
sudo apt-get install -y ros-kinetic-desktop-full

sudo apt-get install -y ros-kinetic-nmea-msgs ros-kinetic-nmea-navsat-driver ros-kinetic-sound-play ros-kinetic-jsk-visualization ros-kinetic-grid-map ros-kinetic-gps-common

sudo apt-get install -y ros-kinetic-controller-manager ros-kinetic-ros-control ros-kinetic-ros-controllers ros-kinetic-gazebo-ros-control ros-kinetic-joystick-drivers

sudo apt-get install -y ros-kinetic-camera-info-manager-py ros-kinetic-camera-info-manager

sudo apt-get install -y python-rosdep python-rosinstall-generator python-wstool python-rosinstall build-essential

sudo rosdep init
rosdep update
  • Environment setup (so that ROS is recognized):
echo "source /opt/ros/kinetic/setup.bash" >> ~/.bashrc
source ~/.bashrc

Errors you might meet:

  • If broken ROS dependencies show up when installing ROS or Autoware, or when running the simulation, they can be fixed by executing:
sudo apt install ros-kinetic-'name of package'

For instance, in our case an RViz package was missing, which was fixed by executing:

sudo apt install ros-kinetic-jsk-rviz-plugins
  • When you run sudo apt-get install ros-kinetic-desktop-full, you might get this error:
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 ros-kinetic-desktop-full : Depends: ros-kinetic-desktop but it is not going to be installed
                            Depends: ros-kinetic-perception but it is not going to be installed
                            Depends: ros-kinetic-simulators but it is not going to be installed
E: Unable to correct problems, you have held broken packages.

You should run sudo apt-get install aptitude and then sudo aptitude install libssl-dev to downgrade the version of libssl-dev.

NOTE: It is very important that the solution you accept reports 0 to remove, as in the transcript below; if it does not, rerun the command!

adeye@tegra-ubuntu:/etc/apt/sources.list.d$ sudo aptitude install libssl-dev
The following NEW packages will be installed:
  libssl-dev{b} libssl-doc{a} 
0 packages upgraded, 2 newly installed, 0 to remove and 30 not upgraded.
Need to get 1,077 kB/2,123 kB of archives. After unpacking 9,388 kB will be used.
The following packages have unmet dependencies:
 libssl-dev : Depends: libssl1.0.0 (= 1.0.2g-1ubuntu4.2) but 1.0.2g-1ubuntu4.5 is installed.
The following actions will resolve these dependencies:

     Keep the following packages at their current version:
1)     libssl-dev [Not Installed]                         

Accept this solution? [Y/n/q/?] n
The following actions will resolve these dependencies:

     Install the following packages:                                            
1)     libssl-dev [1.0.2g-1ubuntu4 (xenial)]                                    

     Downgrade the following packages:                                          
2)     libssl1.0.0 [1.0.2g-1ubuntu4.5 (<NULL>, now) -> 1.0.2g-1ubuntu4 (xenial)]

Accept this solution? [Y/n/q/?] y
The following packages will be DOWNGRADED:
  libssl1.0.0 
The following NEW packages will be installed:
  libssl-dev libssl-doc{a} 
0 packages upgraded, 2 newly installed, 1 downgraded, 0 to remove and 30 not upgraded.
Need to get 2,849 kB of archives. After unpacking 9,457 kB will be used.
Do you want to continue? [Y/n/?] y
Get: 1 http://ports.ubuntu.com/ubuntu-ports xenial/main arm64 libssl1.0.0 arm64 1.0.2g-1ubuntu4 [726 kB]
Get: 2 http://ports.ubuntu.com/ubuntu-ports xenial/main arm64 libssl-dev arm64 1.0.2g-1ubuntu4 [1,046 kB]
Get: 3 http://ports.ubuntu.com/ubuntu-ports xenial-security/main arm64 libssl-doc all 1.0.2g-1ubuntu4.15 [1,077 kB]
Fetched 2,849 kB in 0s (5,572 kB/s)   
Preconfiguring packages ...
dpkg: warning: downgrading libssl1.0.0:arm64 from 1.0.2g-1ubuntu4.5 to 1.0.2g-1ubuntu4
(Reading database ... 166815 files and directories currently installed.)
Preparing to unpack .../libssl1.0.0_1.0.2g-1ubuntu4_arm64.deb ...
Unpacking libssl1.0.0:arm64 (1.0.2g-1ubuntu4) over (1.0.2g-1ubuntu4.5) ...
Selecting previously unselected package libssl-dev:arm64.
Preparing to unpack .../libssl-dev_1.0.2g-1ubuntu4_arm64.deb ...
Unpacking libssl-dev:arm64 (1.0.2g-1ubuntu4) ...
Selecting previously unselected package libssl-doc.
Preparing to unpack .../libssl-doc_1.0.2g-1ubuntu4.15_all.deb ...
Unpacking libssl-doc (1.0.2g-1ubuntu4.15) ...
Processing triggers for libc-bin (2.23-0ubuntu11) ...
Processing triggers for man-db (2.7.5-1) ...
Setting up libssl1.0.0:arm64 (1.0.2g-1ubuntu4) ...
Setting up libssl-dev:arm64 (1.0.2g-1ubuntu4) ...
Setting up libssl-doc (1.0.2g-1ubuntu4.15) ...
Processing triggers for libc-bin (2.23-0ubuntu11) ...

SSDCaffe

The main steps for installing SSDCaffe are listed here.

  • Note that SSDCaffe requires OpenCV:
sudo apt-get install libopencv-dev 

You can check whether OpenCV is installed using the following commands:

  • check installed OpenCV libraries/dependencies

dpkg -l | grep libopencv

  • check OpenCV version:

pkg-config --modversion opencv

  • The remaining dependencies are installed with the steps below:
sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev  

sudo apt-get install libhdf5-serial-dev protobuf-compiler

sudo apt-get install --no-install-recommends libboost-all-dev

sudo apt-get install libgoogle-glog-dev

sudo apt-get install liblmdb-dev

sudo apt-get install libopenblas-dev
  • Clone the SSDCaffe repository and switch to the recommended branch:
git clone -b ssd https://github.com/weiliu89/caffe.git ssdcaffe

cd ~/ssdcaffe

git checkout 4817bf8b4200b35ada8ed0dc378dceaf38c539e4
  • Follow the Install SSDCaffe guide to modify the Makefile and Makefile.config files.

  • Compile the library (so the build folder is generated):

sudo make clean
make all -j6
make test -j6
make runtest -j6

make && make distribute # compile SSDCaffe
  • Add build path to ~/.bashrc file (so SSDCaffe is recognized):
echo "export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/home/adeye/ssdcaffe/build/lib:\$LD_LIBRARY_PATH" >> ~/.bashrc
source ~/.bashrc

Errors you might meet:

Most of the errors met during this installation process, and their solutions, can be found in the Install SSDCaffe guide. However, here we highlight other errors that may occur when running AD-EYE:

  • Fix error:
Permission denied: "/home/adeye/ssdcaffe/results/SSD_512X512"   
  [vision_ssd_detect-19] process has died

Solution: The path of the neural network in vision_ssd_detect is incorrect and should be changed to the correct path. The path is set in the file deploy.prototxt, which should be found in a path similar to ours: /home/adeye/AD-EYE_Core/AD-EYE/Data/ssdcaffe_models/AD-EYE_SSD_Model/SSD_512x512. Note that we are assuming that AD-EYE_Core is in /home/adeye/.

The folder path in deploy.prototxt can be found at lines 1821, 1824 and 1825. If you cannot find the file or the lines, simply use the Linux grep command, e.g. grep -Hrn 'search term' path/to/files, where path/to/files can be omitted if you are already in the correct folder; see the example below.
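
For instance, assuming you are in the folder containing deploy.prototxt, a search for the model folder name mentioned in the error above could look like:

grep -Hrn 'SSD_512x512' deploy.prototxt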

  • Fix error:
[vision_ssd_detect-18] process has died...

Solution: If the methods given here [CHANGE LINK] do not work, the following method can be tried (a command sketch follows the list):

  1. Create a file caffe.conf in the folder /etc/ld.so.conf.d
  2. Add the path of libcaffe.so.1.0.0-rc3 (found in /home/adeye/ssdcaffe/build/lib) into the file caffe.conf
  3. Run sudo ldconfig
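
As a minimal sketch of these three steps, assuming ssdcaffe was cloned to /home/adeye/ssdcaffe as above:

echo "/home/adeye/ssdcaffe/build/lib" | sudo tee /etc/ld.so.conf.d/caffe.conf
sudo ldconfig # rebuild the shared library cache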

NOTE:

  1. As a complement to the modifications [CHANGE LINK] in the Makefiles, for the PX2 choose sm_61 (the discrete GPU) and sm_62 (the integrated Tegra GPU); see the sketch after this list.
  2. During the compilation process, make runtest will report several broken tests, but these do not prevent SSDCaffe from working on the DPX2.
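
As a sketch, assuming the standard SSDCaffe Makefile.config layout, the architecture selection would then look something like:

CUDA_ARCH := -gencode arch=compute_61,code=sm_61 \
             -gencode arch=compute_62,code=sm_62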

More information here: https://devtalk.nvidia.com/default/topic/1066619/errors-when-build-the-single-shot-detector-ssd-on-px2/

Autoware and AD-EYE

In order to install Autoware & AD-EYE, follow the main steps mentioned here.

PS: To install git-lfs, use the "Linux ARM64" package from this link and follow the instructions from here.

You may encounter errors other than those mentioned there:

Errors you might meet:

  • Missing package while building Autoware
CMake Error at /opt/ros/kinetic/share/catkin/cmake/catkinConfig.cmake:83 (find_package):
  Could not find a package configuration file provided by "nmea_msgs" with
  any of the following names:

    nmea_msgsConfig.cmake
    nmea_msgs-config.cmake

  Add the installation prefix of "nmea_msgs" to CMAKE_PREFIX_PATH or set
  "nmea_msgs_DIR" to a directory containing one of the above files.  If
  "nmea_msgs" provides a separate development package or SDK, be sure it has
  been installed.
Call Stack (most recent call first):
  CMakeLists.txt:4 (find_package)


---
Failed   <<< autoware_bag_tools [9.69s, exited with code 1]
Aborted  <<< op_simu [9.25s]
Aborted  <<< libvectormap [20.2s]
Aborted  <<< waypoint_follower [7min 16s]
Aborted  <<< imm_ukf_pda_track [4min 25s]

Summary: 26 packages finished [13min 6s]
  2 packages failed: autoware_bag_tools object_map
  4 packages aborted: imm_ukf_pda_track libvectormap op_simu waypoint_follower
  10 packages had stderr output: astar_search autoware_bag_tools kitti_player map_file ndt_cpu ndt_gpu object_map pcl_omp_registration vector_map_server waypoint_follower
  74 packages not processed

Solution: Run sudo apt-get update, followed by sudo apt-get install -y ros-kinetic-'your_missing_package_name' (in this case sudo apt-get install -y ros-kinetic-nmea-msgs). Note that some packages are not installed when sudo apt-get install -y ros-kinetic-desktop-full is executed.

Connection between DPX2 and Prescan (Windows) when testing AD-EYE

If the connection/communication between the Prescan computer (host) and the PX2 is not working but no error messages are displayed on the host computer, it is most likely due to the argument of the command used to set up the connection. The command used is rosinit('IP_OF_COMPUTER'), where IP_OF_COMPUTER can be either the network address or the name associated with the IP. Due to a Prescan bug, the command should always use the name, which is tegra-ubuntu unless changed, i.e. rosinit('tegra-ubuntu').

To associate the IP with a name, add the IP address and name to the file C:\Windows\System32\drivers\etc\hosts, as sketched below.
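
A minimal example of such a hosts entry (the IP address below is a placeholder; use the PX2's actual address on your network):

192.168.0.100    tegra-ubuntu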

Precautions for embedded system

Disk space

The limited disk space of PX2 may cause errors during installation steps, so always keep an eye on the remaining space and clean up useless files. Some useful tips:

  1. Download large files to an external hard drive, but be mindful of dependencies if software is installed on the external drive.
  2. Use rosclean purge to clean up the ROS log files; see the sketch below. For more information: http://wiki.ros.org/rosclean
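
A few commands for keeping an eye on the remaining space (rosclean check belongs to the same tool as rosclean purge):

df -h /          # remaining disk space on the root partition
rosclean check   # show how much space the ROS logs occupy
rosclean purge   # remove the ROS log files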

Errors you might meet:

  • Error to fix:

Fix "Package exfat-utils is... (Hard drive cannot be recognized)

Solution:

sudo add-apt-repository "deb http://archive.ubuntu.com/ubuntu $(lsb_release -sc) universe"
sudo add-apt-repository universe
sudo apt-get update
sudo apt-get install exfat-fuse exfat-utils 

Source: https://unix.stackexchange.com/questions/321494/ubuntu-16-04-package-exfat-utils-is-not-available-but-is-referred-to-by-anothe

Memory (RAM)

Ubuntu 16.04 on the DPX2 has less than 6 GB of RAM, while the Autoware installation may need more. The build process can get stuck and eventually be terminated with errors; besides closing applications that occupy a lot of RAM (e.g. the browser), swap space may be needed.

There is a trade-off between disk space and RAM: in our case, we allocated 6-8 GB (preferably 8 GB) for the swap file; a sketch is given below. Follow this guide for creating a swap space.
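
A minimal sketch of the standard Ubuntu procedure for an 8 GB swap file (adjust the size to what your disk allows):

sudo fallocate -l 8G /swapfile   # reserve 8 GB on disk
sudo chmod 600 /swapfile         # restrict access to root
sudo mkswap /swapfile            # format it as swap space
sudo swapon /swapfile            # enable it immediately
# To keep the swap file across reboots, add this line to /etc/fstab:
# /swapfile none swap sw 0 0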

Measuring GPU utilization and performance

NVIDIA Nsight Systems tools (including nvprof and the NVIDIA Visual Profiler) are performance tools provided by NVIDIA. They are part of the CUDA toolkit, which should already be installed when the PX2 is flashed (the steps under Driver and CUDA, also given here).

Note, however, that the tools can only be used remotely to profile the PX2, via an SSH connection between the host and the target hardware (PX2). If the PX2 is flashed using the SDK Manager, the SDK Manager will install on the host the CUDA toolkit that matches the one installed on the target; it is important that they match (!).

The CUDA toolkits and Nsight Systems performance tools that can be downloaded directly from the website are not supported on the PX2. Please refer to the following link: https://devtalk.nvidia.com/default/topic/1052052/profiling-drive-targets/connection-error/

NVIDIA Nsight Systems via SSH was not used here because the profiling process needs to be able to terminate and restart the application multiple times. This is problematic since we would need to terminate and restart the Prescan simulation as well, which is difficult because we have no control over, or knowledge of, when the tool does this.

Using nvprof and nsight visual profiler

You can generate a timeline using nvprof in a terminal, locally on the PX2. However, to visualise it and get statistics and optimisation recommendations on the GPU utilisation of CUDA applications (nodes), import the timeline into the Visual Profiler on the host computer:

  1. On the target, run:
nvprof --export-profile <path/to/file/timeline%p.prof> --profile-child-processes roslaunch adeye manager.launch
  2. Move the files to any directory of your choice on the host, go to that directory, and run /usr/local/cuda-9.2/libnvvp/nvvp <timeline#.prof>, replacing <timeline#.prof> with the correct filename; /usr/local/cuda-9.2/libnvvp/nvvp is the path to the Visual Profiler in the CUDA toolkit installed by the SDK Manager.

For more information on nvprof and visual profiler refer to the NVIDIA documentation website: https://docs.nvidia.com/cuda/profiler-users-guide/index.html

Please also note that tegrastats does not provide correct dGPU statistics on the Drive PX2.

GPU memory usage

By compiling the code in the file gpustats.cu and running the executable file, information about all present GPUs will be printed in the terminal followed by the memory usage in percentage for the currently used GPU.

To compile the code, execute the following command in the terminal. Note that CUDA has to be installed before doing this step.

nvcc /path/to/file/gpustats.cu -o gpustats

To execute the resulting binary, run:

./path/to/file/gpustats

As stated above, the program starts by retrieving and printing the info for the present GPUs. It does so using a function from the CUDA Runtime API that returns a cudaDeviceProp structure containing 69 data fields describing the GPU. The function is executed on the host (CPU) and has the following signature:

cudaGetDeviceProperties(cudaDeviceProp* prop, int device)

where prop is a pointer to a cudaDeviceProp struct and device is an integer that encodes the ID of the wanted device. More information on the available data fields in the structure and about the function can be found here.

After retrieving and printing the GPU info, the program enters a loop that retrieves the free and total device memory, which is used to calculate the used memory. Before calculating and printing the memory usage, the program retrieves the device currently being used. More information can be found in the CUDA Runtime API guide from NVIDIA. A minimal sketch of this structure is given below.
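
The following is a minimal sketch of what such a program can look like. It is not the actual gpustats.cu from the repository, and for brevity it takes a single snapshot instead of looping; only standard CUDA Runtime API calls are used:

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Print basic properties for every GPU present.
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    for (int i = 0; i < deviceCount; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("GPU %d: %s (compute capability %d.%d, %zu MB global memory)\n",
               i, prop.name, prop.major, prop.minor,
               prop.totalGlobalMem / (1024 * 1024));
    }

    // Memory usage in percent for the device currently in use.
    int device = 0;
    cudaGetDevice(&device);
    size_t freeBytes = 0, totalBytes = 0;
    cudaMemGetInfo(&freeBytes, &totalBytes);
    double usedPercent = 100.0 * (double)(totalBytes - freeBytes) / (double)totalBytes;
    printf("GPU %d memory usage: %.1f%%\n", device, usedPercent);
    return 0;
}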
