One Page Wiki - AD-EYE/AD-EYE_Core GitHub Wiki

This page is a concatenation of all the other wiki pages for easy Ctrl-F searching. Each underlined heading ending in .md gives the name of the file containing the text that follows it.


Add-road-and-actors-to-Pex-file.md

Process to add a road or an actor to the Pex file

Create a new experiment with Prescan or use an existing one.

Copy the Matlab files add_nametoadd and writenametoaddToPexFile, where nametoadd is the name of the road or actor you would like to add, such as BezierRoad. Paste them into the same folder as your experiment. The Matlab files xml2struct and struct2xml must also be in this folder.

You need a template experiment. The variable pathToTemplatePex = ['C:\Users\adeye\Desktop\real_world_data\TemplatePexFile\TemplatePexFile.pex']; at the end of the Matlab file add_nameoftheroad needs to hold the correct path to the template experiment. The template pex file must contain the road or the actor you would like to add to your experiment.

In the Matlab file add_nameoftheroad, change the road information as needed. For example, you can change the orientation and position of the road.

Information necessary to fill in correctly in the Pex file

Bezier road

To add a Bezier road you need:

  • position of the road (coordinates x1 y1 z1 on the picture) [meter]
  • orientation of the road (angle between the x axis of the road and the x axis of the map) (angle b on the picture) [radian]
  • relative heading of the road (corresponding to the angle α on the picture) [radian]
  • difference between the end and start coordinates of the road (corresponding to ΔX, Δy and Δz on the picture); these are expressed in the road frame, not the map frame [meter]
  • entry tension and exit tension [meter]
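The fourth point is the one that trips people up: ΔX, Δy and Δz are expressed in the road frame, so if you know the end point in map coordinates you must rotate the offset by the road orientation. A minimal Python sketch of that frame convention (the function name and example values are hypothetical, not part of the Matlab scripts):

```python
import math

def road_delta_to_map(dx_road, dy_road, orientation):
    """Rotate an end-minus-start offset from the road frame to the map frame.

    orientation is the angle b between the road's x axis and the map's
    x axis, in radians (as used in the Pex parameters above).
    """
    cos_b, sin_b = math.cos(orientation), math.sin(orientation)
    dx_map = cos_b * dx_road - sin_b * dy_road
    dy_map = sin_b * dx_road + cos_b * dy_road
    return dx_map, dy_map

# Hypothetical values: a 100 m offset along the road's x axis,
# with the road rotated 90 degrees relative to the map.
print(road_delta_to_map(100.0, 0.0, math.pi / 2))
```

Inverting the same rotation converts a map-frame offset into the road-frame values expected by the Pex file.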

Flex road

To add a flex road you need:

  • position of the road (coordinates x1 y1 z1 on the picture) [meter]
  • orientation of the road (angle between the x axis of the road and the x axis of the map) (angle b on the picture) [radian]
  • relative heading of the road (corresponding to the angle α on the picture) [degree]
  • difference between the end and start coordinates of the road (corresponding to ΔX, Δy and Δz on the picture); these are expressed in the road frame, not the map frame [meter]
  • characteristics of turns: position [meter], entry tension [meter], exit tension [meter], heading [radian].

X and Y crossing

To add an X or Y crossing you need:

  • position of the road (coordinates x1 y1 z1 on the picture) [meter]
  • orientation of the road (angle between the x axis of the road and the x axis of the map) [radian]
  • type of the road, i.e. 'Y' or 'X'
  • angle between the x axis of the road and each branch (a, b, c, d on the picture) [radian]

Roundabout

To add a roundabout you need:

  • position of the road (coordinates x1 y1 z1 on the picture) [meter]
  • orientation of the road (angle between the x axis of the road and the x axis of the map) [radian]
  • angle between the x axis of the road and each branch (a, b, c, d on the picture) [radian]

Actors

To add actors you need:

  • type of actor, for example 'car' or 'tree'
  • position of the actor [meter]
  • orientation of the actor (not needed for a tree)

Possible errors and important information

Prescan visualisation

The Matlab scripts cannot update the visualisation in Prescan. If the experiment was open before running the script, close it and open it again. Then build the map.

Possible errors

When you build the map, an index error may appear, like in the picture. To fix it, click on autofix numerical id's, then build again.


AD-EYE-on-PX2.md

Current versions

  • Embedded platform: PX2 AutoChauffeur (P2379)
  • Ubuntu: 16.04
  • ROS: Kinetic
  • OpenCV: 2.4.9.1
  • CUDA: 9.2

Installation

Because NVIDIA is continuously updating the software support for the Drive PX2, the methods on this page are only guaranteed to be valid before November 27, 2019. Please follow up on any updates from NVIDIA after this date and get official support in the NVIDIA developer forum for the Drive PX2.

Driver and CUDA

NVIDIA SDK Manager is provided to flash the board. The needed GPU driver, CUDA and cuDNN are included. The highest version is SDK Manager 0.9.14.4964 with CUDA 9.2 for now. To get GPU information, run the sample /usr/local/cuda/samples/1_Utilities/deviceQuery.

Install DRIVE with SDK Manager provides a step-by-step installation guide for the SDK Manager. Note that during the installation you will be asked about the HARDWARE CONFIGURATION at STEP 01; if you want to use the Nsight toolkit for monitoring GPU usage, select both "Host Machine" and "Target Hardware (PX2 AutoChauffeur)". The reason is explained in the later section Measuring GPU utilization and performance. During the flashing process, connect the USB 2.0 for Debug (PX2 interface) and the host PC with a USB A-A cable. (Source: Hardware QuickStart Guides --> DRIVE PX 2 AutoChauffeur)

Source: https://devtalk.nvidia.com/default/topic/1066116/gpu-driver-of-px2-autochauffeur-p2379-/

If a lower version of CUDA is needed, choose a past version of the SDK to flash the board.

Source: https://devtalk.nvidia.com/default/topic/1066757/zed-camera-working-on-drive-px-2/

ROS Kinetic

Follow the instructions below:

Install ROS Kinetic

Errors you might meet:

  1. If broken ROS dependencies show up when installing ROS or when running the simulation they can be fixed by executing:
sudo apt install ros-kinetic-'name of package'

In our case we had a rviz package missing which got fixed by executing:

sudo apt install ros-kinetic-jsk-rviz-plugins
  2. When you run sudo apt-get install ros-kinetic-desktop-full, you might get this error:
Reading package lists... Done

Building dependency tree

Reading state information... Done

Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:

The following packages have unmet dependencies:  ros-kinetic-desktop-full :

Depends: ros-kinetic-desktop but it is not going to be installed

Depends: ros-kinetic-perception but it is not going to be installed

Depends: ros-kinetic-simulators but it is not going to be installed E: Unable to correct problems, you have held broken packages.

To downgrade the version of libssl-dev, run sudo apt-get install aptitude and then sudo aptitude install libssl-dev.

(Tip: it is very important that the accepted solution says "0 to remove"; if it does not, rerun the command)

adeye@tegra-ubuntu:/etc/apt/sources.list.d$ sudo aptitude install libssl-dev
The following NEW packages will be installed:
  libssl-dev{b} libssl-doc{a} 
0 packages upgraded, 2 newly installed, 0 to remove and 30 not upgraded.
Need to get 1,077 kB/2,123 kB of archives. After unpacking 9,388 kB will be used.
The following packages have unmet dependencies:
 libssl-dev : Depends: libssl1.0.0 (= 1.0.2g-1ubuntu4.2) but 1.0.2g-1ubuntu4.5 is installed.
The following actions will resolve these dependencies:

     Keep the following packages at their current version:
1)     libssl-dev [Not Installed]                         

Accept this solution? [Y/n/q/?] n
The following actions will resolve these dependencies:

     Install the following packages:                                            
1)     libssl-dev [1.0.2g-1ubuntu4 (xenial)]                                    

     Downgrade the following packages:                                          
2)     libssl1.0.0 [1.0.2g-1ubuntu4.5 (<NULL>, now) -> 1.0.2g-1ubuntu4 (xenial)]

Accept this solution? [Y/n/q/?] y
The following packages will be DOWNGRADED:
  libssl1.0.0 
The following NEW packages will be installed:
  libssl-dev libssl-doc{a} 
0 packages upgraded, 2 newly installed, 1 downgraded, 0 to remove and 30 not upgraded.
Need to get 2,849 kB of archives. After unpacking 9,457 kB will be used.
Do you want to continue? [Y/n/?] y
Get: 1 http://ports.ubuntu.com/ubuntu-ports xenial/main arm64 libssl1.0.0 arm64 1.0.2g-1ubuntu4 [726 kB]
Get: 2 http://ports.ubuntu.com/ubuntu-ports xenial/main arm64 libssl-dev arm64 1.0.2g-1ubuntu4 [1,046 kB]
Get: 3 http://ports.ubuntu.com/ubuntu-ports xenial-security/main arm64 libssl-doc all 1.0.2g-1ubuntu4.15 [1,077 kB]
Fetched 2,849 kB in 0s (5,572 kB/s)   
Preconfiguring packages ...
dpkg: warning: downgrading libssl1.0.0:arm64 from 1.0.2g-1ubuntu4.5 to 1.0.2g-1ubuntu4
(Reading database ... 166815 files and directories currently installed.)
Preparing to unpack .../libssl1.0.0_1.0.2g-1ubuntu4_arm64.deb ...
Unpacking libssl1.0.0:arm64 (1.0.2g-1ubuntu4) over (1.0.2g-1ubuntu4.5) ...
Selecting previously unselected package libssl-dev:arm64.
Preparing to unpack .../libssl-dev_1.0.2g-1ubuntu4_arm64.deb ...
Unpacking libssl-dev:arm64 (1.0.2g-1ubuntu4) ...
Selecting previously unselected package libssl-doc.
Preparing to unpack .../libssl-doc_1.0.2g-1ubuntu4.15_all.deb ...
Unpacking libssl-doc (1.0.2g-1ubuntu4.15) ...
Processing triggers for libc-bin (2.23-0ubuntu11) ...
Processing triggers for man-db (2.7.5-1) ...
Setting up libssl1.0.0:arm64 (1.0.2g-1ubuntu4) ...
Setting up libssl-dev:arm64 (1.0.2g-1ubuntu4) ...
Setting up libssl-doc (1.0.2g-1ubuntu4.15) ...
Processing triggers for libc-bin (2.23-0ubuntu11) ...

ssdCaffe

Follow the steps here: Install ssdCaffe. Most of the errors met and their solutions can be found in the Note section of the same website.

Fix error: Permission denied: "/home/adeye/ssdcaffe/results/SSD_512X512" [vision_ssd_detect-19] process has died

Solution: the path used by vision_ssd_detect is incorrect and should be changed to the correct path. The path is set in the file deploy.prototxt, which should be found in a path similar to what we found: /home/nvidia/AD-EYE_Core/AD-EYE/Data/ssdcaffe_models/AD-EYE_SSD_Model/SSD_512x512. The folder path in deploy.prototxt can be found at lines 1821, 1824 and 1825. If you cannot find the file or line, simply use the Linux grep command, e.g. grep -Hrn 'search term' path/to/files, where path/to/files can be omitted if you're already in the correct folder.

Fix "[vision_ssd_detect-18] process has died..."

If the methods given here don't work, another method can be tried: create a file caffe.conf in the folder /etc/ld.so.conf.d, add the path of libcaffe.so.1.0.0-rc3 to caffe.conf, then run sudo ldconfig.

Note

  1. As a complement to these Modifications: for the PX2, sm=61 and sm=62.
  2. During the compilation process, make runtest will report several broken tests, but this won't cause a real error on PX2.

Source: https://devtalk.nvidia.com/default/topic/1066619/errors-when-build-the-single-shot-detector-ssd-on-px2/

Autoware and AD-EYE

Install Autoware and AD EYE

PS: To build only one package use this: catkin_make --only-pkg-with-deps <package_name>

Fix error: Missing package while installing Autoware

Error message: cmake error at /opt/ros/kinetic/share/catkin/cmake/catkinConfig.cmake:83 Could not find a package configuration file provided by "autoware_build_flags"

Solution: run sudo apt-get update, then sudo apt-get install -y followed by your missing package name.

Testing

If the connection/communication between the prescan computer (host) and the PX2 is not working but no error messages are displayed on the host computer, it is most likely due to the argument of the command used to setup the connection. The command used is rosinit('IP_OF_COMPUTER') where IP_OF_COMPUTER can either be the network address or the set name associated with the IP. Due to a Prescan bug, the command should always use the name which is, unless changed, "tegra-ubuntu". To associate the IP with a name add the IP address and name to the file C:\Windows\System32\drivers\etc\hosts.
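For example, if the PX2's network address were 192.168.0.100 (a hypothetical value; use your actual network address), the hosts file entry would look like:

```
# C:\Windows\System32\drivers\etc\hosts  (edit as administrator)
192.168.0.100    tegra-ubuntu
```

After this, rosinit('tegra-ubuntu') resolves to the PX2.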

Measuring GPU utilization and performance

NVIDIA Nsight Systems tools (including nvprof and the nsight visual profiler) are performance tools provided by NVIDIA. They are part of the CUDA toolkit, which should already be installed when the PX2 is flashed (the steps under Driver and CUDA, also given here).

Note, however, that the tools can only be used remotely to profile the PX2, via an SSH connection between the host and the target hardware (PX2). When the PX2 is flashed with the SDK Manager, the SDK Manager installs on the host the CUDA toolkit that matches the one installed on the target; it is important that they match. The CUDA toolkits and Nsight Systems performance tools that can be downloaded directly from the website are not supported on the PX2. Please refer to the link below: https://devtalk.nvidia.com/default/topic/1052052/profiling-drive-targets/connection-error/

NVIDIA Nsight Systems over an SSH connection was not used here because the profiling process needs to terminate and restart the application multiple times. This is problematic since we would need to terminate and restart the Prescan simulation as well, which is difficult because we have no control over, or knowledge of, when the tool does this.

Using nvprof and nsight visual profiler

You can generate a timeline using nvprof in the terminal, locally on the PX2. To visualise it and get statistics and optimisation recommendations, import the timeline into the visual profiler on the host computer; this gives statistics on GPU utilisation of CUDA applications (nodes).

On the target, run: nvprof --export-profile <path/to/file/timeline%p.prof> --profile-child-processes roslaunch adeye manager.launch

Move the files to any directory of your choice on the host, go to the directory where the files are saved, and run /usr/local/cuda-9.2/libnvvp/nvvp <timeline#.prof>, replacing <timeline#.prof> with the correct filename; /usr/local/cuda-9.2/libnvvp/nvvp is the path to the visual profiler in the CUDA toolkit installed by the SDK Manager.

For more information on nvprof and the visual profiler, refer to the NVIDIA documentation website: https://docs.nvidia.com/cuda/profiler-users-guide/index.html

Please also note that tegrastats does not provide correct dGPU statistics on the Drive PX2.

GPU memory usage

By compiling the code in the file gpustats.cu and running the executable file, information about all present GPUs will be printed in the terminal followed by the memory usage in percentage for the currently used GPU. To compile the code, execute the following command in the terminal. Note: CUDA has to be installed before doing this step.

nvcc /path to file/gpustats.cu -o gpustats

To run the resulting executable, use the following command.

./path to file/gpustats

As stated above, the program starts by retrieving and printing the info for the present GPUs. It does so by using a function from the CUDA Runtime API which returns a cudaDeviceProp structure containing 69 data fields corresponding to the GPU. The function is executed on the host (CPU) and is stated as follows.

cudaGetDeviceProperties(cudaDeviceProp* prop, int  device)

where prop is a pointer to a cudaDeviceProp structure and device is the id of the wanted device. More information about the function and the data fields available in the structure can be found here.

After retrieving and printing the GPU info the program continues into a loop that retrieves the free and total device memory which is used to later calculate the used memory. Before calculating and printing the memory usage the program retrieves the device currently being used. All functions used can be found in the CUDA Runtime API.
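The memory-usage arithmetic in that loop is simple. Below is a Python sketch of the same calculation; the function name and the numbers are illustrative, while the real program gets the free and total values from the CUDA Runtime API (cudaMemGetInfo):

```python
def used_memory_percent(free_bytes, total_bytes):
    """Same arithmetic as the gpustats.cu loop: the used share is derived
    from the free and total device memory."""
    used = total_bytes - free_bytes
    return 100.0 * used / total_bytes

# Hypothetical numbers for illustration (not real PX2 readings):
# 3 GiB free out of 4 GiB total -> 25% used.
print(used_memory_percent(free_bytes=3 * 1024**3, total_bytes=4 * 1024**3))
```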

Precautions for embedded system

Disk space

The limited disk space of the PX2 may cause errors during installation steps, so always keep an eye on the remaining space and clean up useless files. Some useful tips:

  1. Download files to an external hard drive, but be careful with dependencies if software is installed on an external hard drive.
  2. Use rosclean purge to clean up the ros log files. Source: http://wiki.ros.org/rosclean

Fix "Package exfat-utils is..." (Hard drive cannot be recognized)

sudo add-apt-repository "deb http://archive.ubuntu.com/ubuntu $(lsb_release -sc) universe"
sudo add-apt-repository universe
sudo apt-get update
sudo apt-get install exfat-fuse exfat-utils 

Source: https://unix.stackexchange.com/questions/321494/ubuntu-16-04-package-exfat-utils-is-not-available-but-is-referred-to-by-anothe

Memory (RAM)

Ubuntu 16.04 on the PX2 has less than 6 GB of RAM, while the Autoware installation may need more. The build can get stuck and eventually be terminated with errors. Besides closing applications that occupy a lot of RAM (e.g. the browser), swap space may be needed. There is a trade-off between disk space and RAM space; in our case, we allocated 2 GB for the swapfile. Guide to Creating Swap Space


Autonomous-Driving-Intelligence-(ADI).md

Architecture

In the current state the Autonomous Driving Intelligence (ADI) is composed of two separate control channels:

  • The nominal channel
  • The safety channel

The ADI is entirely running using ROS as a middleware. The channels define different frames that are described here.

The nominal channel

The nominal channel is the channel in control during nominal conditions. It is based on the open source project Autoware.

A deeper description of the detection stack can be found here.

The safety channel

The safety channel was entirely developed at KTH. Its role is to monitor the nominal channel and to make sure it stays within its operational envelope.

It has a basic perception stack that runs a Euclidean clustering node on the Lidar data and adds the hulls of the detected clusters to a layer of the gridmap. The safety supervisor uses fault monitors to monitor different types of faults. The gridmap is then flattened and used by the safe stop motion planner (SSMP).

ROS structure and features

To start the different components several launch files were made. Those files correspond to features of the ADI.

Feature name
Recording
Map
Sensing
Localization
Fake_Localization
Detection
Mission_Planning
Motion_Planning
Switch
SSMP
Rviz
Experiment specific recording

The manager node is in charge of starting and stopping those features according to a state machine.
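The manager's code is not reproduced in this wiki, so the following Python sketch only illustrates the idea: each state maps to the set of features that should be running, and a transition stops and starts features to match. The state names and feature sets here are hypothetical, not the real ADI state machine:

```python
# Hypothetical sketch (state names and feature sets are illustrative).
FEATURES_PER_STATE = {
    "INITIALIZING": {"Map", "Sensing", "Localization"},
    "DRIVING": {"Map", "Sensing", "Localization", "Detection",
                "Mission_Planning", "Motion_Planning", "Switch", "SSMP"},
    "SAFE_STOP": {"Map", "Sensing", "Localization", "Switch", "SSMP"},
}

class FeatureManager:
    """Keeps the set of running features in sync with the current state."""

    def __init__(self):
        self.running = set()

    def transition(self, new_state):
        wanted = FEATURES_PER_STATE[new_state]
        to_start = wanted - self.running   # features to launch
        to_stop = self.running - wanted    # features to shut down
        # A real manager would start/stop the corresponding launch files here.
        self.running = set(wanted)
        return to_start, to_stop

manager = FeatureManager()
manager.transition("INITIALIZING")
to_start, to_stop = manager.transition("DRIVING")
print(sorted(to_start))
```

Computing the two set differences per transition means features shared by both states keep running across the state change.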

Model of the Platform

To see a representation of the system, refer to https://gits-15.sys.kth.se/AD-EYE/ADI_Capella where the whole platform has been modeled.

This model can be compared with what is running in real time using the rqt_graph command (see here for more information).


BugAutoware.md

Bug report following the installation of Autoware :

Every dependency was correctly installed (cf. wiki), along with the following software:

  • Ros Kinetic

  • Ubuntu 16.04

  • CUDA 10.1

  • QT 5.5.1

Bugs encountered

During the installation of Autoware (w/ the repo AD-EYE_Core on git), 5 bugs were encountered :

  • The imported target “vtkRenderingPythonTkWidgets” references the file “/usr/lib/x86_64-linux-gnu/libvtkRenderingPythonTkWidgets.so” but the file does not exist.

Possible reasons : etc etc

Fixed

By installing the right package (libvtk6-dev; check this website https://packages.ubuntu.com/search?suite=trusty&arch=any&mode=exactfilename&searchon=contents&keywords=libvtkRenderingPythonTkWidgets.so to see which package you should install depending on the path written in the error) AND, if this doesn't work, by creating a symlink of this package with the command:

sudo ln -s /usr/lib/python2.7/dist-packages/vtk/libvtkRenderingPythonTkWidgets.x86_64-linux-gnu.so /usr/lib/x86_64-linux-gnu/libvtkRenderingPythonTkWidgets.so


  • The imported target “vtk” references the file “/usr/bin/vtk “ but the file does not exist.

    Possible reasons : etc etc

Fixed

w/ sudo update-alternatives --install /usr/bin/vtk vtk /usr/bin/vtk6 10


  • CMake Error at /usr/lib/x86_64-linux-gnu/cmake/Qt5Gui/Qt5GuiConfig.cmake:27 (message):

    The imported target "Qt5::Gui" references the file

    "/usr/lib/x86_64-linux-gnu/libEGL.so"

    but this file does not exist. Possible reasons include:

    - The file was deleted, renamed, or moved to another location.

    - An install or uninstall procedure did not complete successfully.

    - The installation package was faulty and contained...

But in fact this file exists. The same problem appeared with libGL (instead of libEGL) just after that.

Fix :

To solve it, make symlinks by running the following commands:

For libEGL:

ls /usr/lib/x86_64-linux-gnu | grep -i libegl --> find the file name to do the symlink

In my case:

sudo rm /usr/lib/x86_64-linux-gnu/libEGL.so

sudo ln -s /usr/lib/x86_64-linux-gnu/libEGL.so.1.1.0 /usr/lib/x86_64-linux-gnu/libEGL.so

For libGL:

ls /usr/lib/x86_64-linux-gnu | grep -i libgl --> find the file name to do the symlink

In my case:

sudo rm /usr/lib/x86_64-linux-gnu/libGL.so

sudo ln -s /usr/lib/x86_64-linux-gnu/libGL.so.1 /usr/lib/x86_64-linux-gnu/libGL.so (use the libGL file name found with the grep above)


  • Errors in /usr/include/pcl-1.7/pcl/point_cloud.h:

        /usr/include/pcl-1.7/pcl/point_cloud.h:586:100: error: template-id ‘getMapping’ used as a declarator
         friend boost::shared_ptr& detail::getMapping(pcl::PointCloud &p);
        /usr/include/pcl-1.7/pcl/point_cloud.h:586:100: error: ‘getMapping’ is neither function nor member function; cannot be declared friend
        cc1plus: error: expected ‘;’ at the end of member declaration
        /usr/include/pcl-1.7/pcl/point_cloud.h:586:111: error: expected ‘)’ before ‘&’ token

This error seems to be linked with the following error :

  • CMake error at ndt_gpu_generated_Registration.cu.o.cmake:266 (message):

        Error generating file /home/adeye/AD-EYE_Core/Autoware_Private_Fork/ros/build/computing/perception/localization/lib/ndt_gpu/CMakeFiles/ndt_gpu.dir/src/./ndt_gpu_generated_Registration.cu.o
        computing/perception/localization/lib/ndt_gpu/CMakeFiles/ndt_gpu.dir/build.make:84: recipe for target ‘computing/perception/localization/lib/ndt_gpu/CMakeFiles/ndt_gpu.dir/src/./ndt_gpu_generated_Registration.cu.o’ failed
        make[2]: *** [computing/perception/localization/lib/ndt_gpu/CMakeFiles/ndt_gpu.dir/src/./ndt_gpu_generated_Registration.cu.o] Error 1
        make[2]: *** Waiting for unfinished jobs...

Those 2 errors crashed the installation at ~70% with the following fatal error: make -j32 -i32 failed. The CMake error log mentioned an undefined pthread_create. All of those errors pointed to files related (in part) to CUDA, and some research on the internet confirmed that CUDA 10.1 was not a good fit with Autoware. A downgrade to CUDA 10.0 fixed those 2 errors and the installation of Autoware completed without any problem. Conclusion: the CUDA version should be adapted to the GPU and be below 10.1 for this particular version of Autoware.


CAN-communication.md

Set CAN bus configuration on PX2

DPX2 CAN ports mapping table

On the NVIDIA DRIVE PX2, CAN ports are mapped in this way:

  • can-5 -> can0 on Tegra A (CAN E)
  • can-6 -> can1 on Tegra B (CAN F)
  • can-1 -> channel A on Aurix
  • can-2 -> channel B on Aurix
  • can-3 -> channel C on Aurix
  • can-4 -> channel D on Aurix

Usage of CAN ports of drive px2

Setup guide

You can do this by following this link : https://forums.developer.nvidia.com/t/drivepx2-easycan-setup-guide/60271

To find these files :

  • LINUX_LXTHREADSX86_DrivePxApp.elf
  • EasyCanConfigFile.conf
  • DrivePxApp.conf

You can find them in DPX2-P2379-EB-V4.02.02_release.zip at this path : ~/nvidia/nvidia_sdk/DRIVE_OS_5.0.10.3_SDK_with_DriveWorks_Linux_OS_PX2_AUTOCHAUFFEUR/DriveSDK/drive-t186ref-foundation/utils/scripts on the host PC.

Issues

If you're having trouble synchronizing the time between Aurix and Tegra, you can solve the problem by following this link : https://forums.developer.nvidia.com/t/how-to-set-aurix-time-on-px2/71172

Sending CAN messages

Sending random messages to CAN-E or CAN-F :

(On Tegra A)

  • sudo ip link set can0 type can bitrate 500000 (use can1 if you are on Tegra B)
  • sudo ip link set up can0
  • cangen can0 (on another terminal)
  • candump can0

Send random messages to can bus A-D :

  • cd /usr/local/driveworks/bin/
  • sudo ./sample_canbus_logger --driver=can.aurix --params=ip=10.42.0.146,bus=a --hwtime=0 --send_i_understand_implications=1000

You can see specifications of this sample here :

https://docs.nvidia.com/drive/driveworks-4.0/dwx_canbus_logger_sample.html

Send a specific message to can bus A-D :

  • cd /usr/local/driveworks/bin/
  • sudo ./sample_canbus_cansend --driver=can.aurix --params=ip=10.42.0.146,bus=a --hwtime=0 --send_i_understand_implications=1000 --send_id=90 --data=0123456789ABCDEF

Send a can frame to can bus A-F :

  • From Home : ./cansend_function

Reading can messages :

CAN messages can be read using the CANoe software on Windows.


Sending CAN messages from Aurix

(On host PC)

  • Connect to PX2 via USB 2.0 host hub
  • Use sudo minicom -D /dev/ttyUSB1 (USB1 for Aurix, USB2 for Tegra A, USB6 for Tegra B)
  • You can use cancyclic a on to generate cyclic frames on can bus A
  • You can use cansend a 0x50 0xffff0000 0xaaaa5555 to send a can frame on bus A with ID 0x50 (Aurix)

Different IDs to use for can bus A-F :

Pdu: Dpx_Aurix_Tx_ChnA_80T   node: a id: 0x00000050
Pdu: Dpx_TegraA_Tx_ChnA_112T node: a id: 0x00000070
Pdu: Dpx_TegraB_Tx_ChnA_144T node: a id: 0x00000090
Pdu: Dpx_Aurix_Tx_ChnB_81T   node: b id: 0x00000051
Pdu: Dpx_TegraA_Tx_ChnB_113T node: b id: 0x00000071
Pdu: Dpx_TegraB_Tx_ChnB_145T node: b id: 0x00000091
Pdu: Dpx_Aurix_Tx_ChnC_82T   node: c id: 0x00000052
Pdu: Dpx_TegraA_Tx_ChnC_114T node: c id: 0x00000072
Pdu: Dpx_TegraB_Tx_ChnC_146T node: c id: 0x00000092
Pdu: Dpx_Aurix_Tx_ChnD_83T   node: d id: 0x00000053
Pdu: Dpx_TegraA_Tx_ChnD_115T node: d id: 0x00000073
Pdu: Dpx_TegraB_Tx_ChnD_147T node: d id: 0x00000093
Pdu: Dpx_Aurix_Tx_ChnE_84T   node: e id: 0x00000054
Pdu: Dpx_TegraA_Tx_ChnE_116T node: e id: 0x00000074
Pdu: Dpx_TegraB_Tx_ChnE_148T node: e id: 0x00000094
Pdu: Dpx_Aurix_Tx_ChnF_85T   node: f id: 0x00000055
Pdu: Dpx_TegraA_Tx_ChnF_117T node: f id: 0x00000075
Pdu: Dpx_TegraB_Tx_ChnF_149T node: f id: 0x00000095
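The IDs in the table above follow a simple pattern: the channel letter contributes an index (a=0 … f=5) that is added to a per-sender base (Aurix 0x50, Tegra A 0x70, Tegra B 0x90). The Python helper below is just an illustrative sketch of that pattern, not an official API:

```python
# Per-sender base IDs, read off the table above.
BASES = {"Aurix": 0x50, "TegraA": 0x70, "TegraB": 0x90}

def tx_id(sender, channel):
    """CAN ID used when `sender` transmits on CAN channel 'a'..'f'."""
    return BASES[sender] + (ord(channel) - ord("a"))

print(hex(tx_id("Aurix", "a")))   # matches Dpx_Aurix_Tx_ChnA: 0x50
print(hex(tx_id("TegraB", "f")))  # matches Dpx_TegraB_Tx_ChnF: 0x95
```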

Reading messages :

To see if the messages are received, open CANoe on Windows PC, start the simulation and open the TRACE window.

CAN sender app

Description

This application allows a CAN frame to be sent from one of the two Tegra units and read out via CANoe on another Windows computer.

Prerequisites

To use this application, you need the SDK Manager with a complete host and target deployed SDK version (120 GB of free disk space is necessary).

Installation

You can use this link to download it : ​https://developer.nvidia.com/drive/downloads#?tx=$product,drive_px2

The user guide : ​https://docs.nvidia.com/drive/archive/5.1.0.2L/sdkm/download-run-sdkm/index.html

After installation, launch it with sdkmanager --archivedversions

Follow this wiki to step 4 and skip the rest: https://github.com/AD-EYE/To_be_deleted_AD-EYE_Core-Docs-Tmp/blob/master/drive_px2_guides/2_Reset-DPX2.md

Instructions

(On Host PC)

Create a new file to send CAN frame :

  • Go to cd $HOME/ros2can/ros2can-master/cansend_app/src
  • Create a directory your_directory containing your code and a CMakeLists.txt (ex : cansend_function) and adapt it
  • Add your sub-directory into the CMakeList.txt in $HOME/ros2can/ros2can-master/cansend_app

Instructions to compile :

Building your code :

  • Go to cd $HOME/ros2can/ros2can-master/cansend_app
  • Create a build folder and go into it : mkdir build && cd build
  • Execute cmake ..
  • Build with the following command : make install
  • You can find your executable file in $HOME/ros2can/ros2can-master/cansend_app/install/bin

(On Tegra)

Running the code :

  • Copy your executable file where you want and modify the permission with sudo chmod 777 your_executable
  • Run it with ./your_executable

If sending frames to can0 (e) or can1 (f), do

  • sudo ip link set can0 type can bitrate 500000
  • sudo ip link set up can0

Reading frames :

  • Start simulation
  • Frames sent will be displayed in Trace window

Change-Starting-Position-and-Goal.md

On Simulation Computer (Windows)

To change starting point:

Open PreScan GUI and open a PreScan simulation world (for example: W01_BASE_Map --> Simulation --> W01_BASE_Map.pex)

To change the starting position, drag and drop the ego vehicle to the desired position.

To change the goal:

Open the Simulink model and double click on the ego vehicle.

Change Coordinates of the inputs to the ROS Send Goal block.

To get the quaternions, check the following website. In the PreScan GUI, the x axis points to the right, the y axis up, and the z axis out of the screen. To set the car's orientation, only the Euler angle around z should be non-zero.
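Since only the z Euler angle is non-zero, the quaternion can also be computed directly instead of using a website; a small Python sketch (the function name is hypothetical):

```python
import math

def yaw_to_quaternion(yaw):
    """Quaternion (x, y, z, w) for a rotation of `yaw` radians around z,
    the only non-zero Euler angle needed for the car's heading."""
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))

# Hypothetical example: heading rotated 90 degrees around z.
print(yaw_to_quaternion(math.pi / 2.0))
```

The resulting four values can be entered as the orientation inputs of the ROS Send Goal block.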

Make the new car move following a predefined trajectory (read the PreScan manual about paths and trajectories).

To add a car:

Select a new car from the left side (Library Elements) and place it on the Build Area.

To add a new path:

Select Inherited Path Definition.

Draw the path manually on road.

Drag the car on the leftmost side of the path.

Now the trajectory will be assigned to the car and the path.

Right click on the car and select object configuration. The trajectory and speed profile are shown in the following figure.

Run a simulation


Cloning-the-VM.md

Copying the VM

To restore a virtual machine the content of the hard drive usually named w10.qcow2 must be copied to /var/lib/libvirt/images.

virtio-win-0.1.141.iso needs to be placed in /home/adeye/Downloads.

The xml definition of the machine can then be imported using virsh. Once virsh is started, use define VM.xml, which will create a new VM as defined in VM.xml.

Make sure that the CPU configuration is correct and that the connection with the host and with the internet works properly.

Connection issues

Cleaning the iptables rules of the host, which can cause issues for communication between host and guest:

sudo iptables -P INPUT ACCEPT
sudo iptables -P FORWARD ACCEPT
sudo iptables -P OUTPUT ACCEPT
sudo iptables -t nat -F
sudo iptables -t mangle -F
sudo iptables -F
sudo iptables -X

To restore iptables:

sudo iptables-restore < save_iptables

To save iptables:

sudo iptables-save > save_iptables

source: https://www.thomas-krenn.com/en/wiki/Saving_Iptables_Firewall_Rules_Permanently


Code-Documentation.md

Documenting the code

The documentation tool used is doxygen.

C++ code

To document the source files, tags need to be used in the comments:

   /*!
    * \brief
    * \param
    * \return
    * \details 
    * \todo
   */

A class or function should always contain at least the tags brief, param and return (if the function does not return void).

See an example here: https://gits-15.sys.kth.se/AD-EYE/AD-EYE_Core/blob/dev/AD-EYE/ROS_Packages/src/AD-EYE/src/experimentC.cpp#L40

       /*!
        * \brief Constructor
        * \param nh A reference to the ros::NodeHandle initialized in the main function.
        * \details Initializes the node and its components such as the subscriber /current_velocity.
        */
        ExperimentC(ros::NodeHandle nh): ExperimentRecording(nh), nh_(nh), point_cloud_to_occupancy_grid_(nh, true), rate_(20)
        {
            ...
        }

Python Code

Refer to this link: https://www.doxygen.nl/manual/docblocks.html#pythonblocks

As for C++, a class or function should always contain at least the tags brief, param and return.

See an example here: https://gits-15.sys.kth.se/AD-EYE/AD-EYE_Core/blob/dev/AD-EYE/ROS_Packages/src/AD-EYE/src/base_map_goal_loop_sender.py#L31

    ##A method for publishing new goals.
    #@param self The object pointer
    #@param index The index (Integer) for the new goal in the list of goals
    def publishNewGoal(self,index):   
        ...

JavaScript code

Use the same template as C++

Generating the documentation

To generate the documentation, navigate to AD-EYE_Core/Documentation and run the command doxygen Doxyfile.

A folder called html should appear. Open index.html to see the documentation.

Setting up your environment

To install doxygen, run sudo apt install doxygen in a terminal.

The following dependencies might be needed:

sudo apt-get install flex
sudo apt-get install bison

If set to YES, the variable SOURCE_BROWSER shows the line number and file where each element is defined.
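For instance, the relevant setting in the Doxyfile (a standard Doxygen configuration option) could look like:

```
# Doxyfile excerpt: show the file and line number where each element is defined
SOURCE_BROWSER = YES
```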

References

-   https://www.doxygen.nl/manual/docblocks.html#pythonblocks
-   https://realpython.com/documenting-python-code/
-   https://github.com/Feneric/doxypypy
-   https://www.doxygen.nl/manual/install.html
-   https://numpydoc.readthedocs.io/en/latest/format.html
-   https://www.youtube.com/watch?v=YxmdCxX9dMk
-   https://www.youtube.com/watch?v=TtRn3HsOm1s

Configure-PC-name-and-user-account.md

Change computer name

To avoid issues with losing sudo access to the display, follow these steps to modify the computer name in Ubuntu 16.04.

First, edit the hosts file by typing the following command in the terminal: sudo gedit /etc/hosts. It will prompt you to enter the password. (The name to be changed should be on the second line.)

Then, edit the hostname by typing the following command in the terminal : sudo gedit /etc/hostname

Finally, change the computer name by clicking in the top right corner and on About This Computer.

If there is an issue with sudo and graphical applications

The error:

Failed to connect to Mir: Failed to connect to server socket: No such file or directory
Unable to init server: Could not connect: Connection refused
Error: cannot open display: :0

can be solved by the command xhost +. Be aware that there might be an issue with the hostname that led to this issue (check the previous step to change the computer name).

Make a new user in Ubuntu

Creating a new user can be achieved through the GUI. Press the super key and look for user accounts.

Click unlock and add an administrator account. Put a password by clicking on account disabled. Ubuntu might want an elaborate password. It can later be changed using the command sudo passwd adeye.


Next step: Install ROS Kinetic
Back to the overview: Installation

Conventions.md

In order to have a coherent project despite a large number of people working on it, the conventions presented on this page need to be followed.

Coding conventions

| Language | Link |
|---|---|
| C++ | http://wiki.ros.org/CppStyleGuide |
| Python | follow C++ conventions: http://wiki.ros.org/CppStyleGuide |
| Javascript | http://wiki.ros.org/JavaScriptStyleGuide |
| Matlab | https://www.ee.columbia.edu/~marios/matlab/MatlabStyle1p5.pdf (for functions use camelCase) |

Naming checklist:

  • file names
  • variable names
  • class names
  • function names

To generate documentation for the code the comments should follow the guidelines under the section Documenting the code here.

Git conventions

Branch Naming conventions

A branch's name should never contain the name of its creator, but should explicitly refer to the work it is used for. You should use the following prefixes:

  • feature/... indicates the new feature you're working on

  • bugfix/... indicates the bug you're trying to fix

  • refactor/... indicates what you're modifying

The name of the branch itself must describe the feature well and follow the snake_case convention (words separated by underscores, _, all written in lowercase). A few examples are listed below:

  • Let's say you are working on a branch where you implement a new functionality for the GUI: to add a panel that represents the linear velocity of the car. Then, a good branch name is feature/linear_velocity_pannel
  • Imagine you want to fix a bug in the detection module classifying objects from camera images. A branch name for this case could be bugfix/camera_object_detector

The titles are short, but descriptive.
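For illustration, the naming convention above can be encoded in a small check. The regex and function below are hypothetical, not an official project tool:

```python
import re

# feature/..., bugfix/... or refactor/..., followed by a lowercase snake_case name
BRANCH_PATTERN = re.compile(r"^(feature|bugfix|refactor)/[a-z0-9]+(_[a-z0-9]+)*$")

def is_valid_branch_name(name):
    """Return True if the branch name follows the project convention."""
    return bool(BRANCH_PATTERN.match(name))

print(is_valid_branch_name("feature/linear_velocity_pannel"))  # True
print(is_valid_branch_name("john/my_fix"))                     # False
```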

Commits

A commit name should be explicit, and never exceed a sentence. It should be phrased in past tense such as "Added trajectory markers".

Folder conventions

Experiments folders

For experiments everything should be in AD-EYE_Core/AD-EYE/Experiments, as in the following example showing which files are on Git. The two important points are:

  • the folder naming and structure
  • the Prescan experiment names.
AD-EYE_Core/AD-EYE/Experiments/W01_Base_World
├── Mapping
│   ├── Resources
│   │   └── LightMap
│   │       ├── GeneralLight.exr
│   ├── W01_Base_World_cs_hws.mat
│   ├── W01_Base_World_cs.slx
│   └── W01_Base_World.pex
├── OpenSCENARIO
│   ├── Resources
│   │   └── LightMap
│   │       ├── GeneralLight.exr
│   ├── W01_Base_World_cs_hws.mat
│   ├── W01_Base_World_cs.slx
│   └── W01_Base_World.pex
├── Pointcloud_Files
│   ├── pcd10220632.pcd
│   ├── pcd10225742.pcd
│   ├── ...
├── Simulation
│   ├── Resources
│   │   └── LightMap
│   │       ├── GeneralLight.exr
│   ├── W01_Base_World_cs_hws.mat
│   ├── W01_Base_World_cs.slx
│   └── W01_Base_World.pex
└── Vector_Map_Files
    ├── dtlane.csv
    ├── lane.csv
    ├── line.csv
    ├── node.csv
    ├── point.csv
    ├── roadedge.csv
    ├── signaldata.csv
    ├── stopline.csv
    ├── vector.csv
    └── whiteline.csv
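As an illustration, compliance with this structure could be checked with a small script. The helper below is hypothetical; the folder names come from the tree above:

```python
from pathlib import Path

# Sub-folders every experiment is expected to contain (from the structure above)
REQUIRED_SUBFOLDERS = ["Mapping", "OpenSCENARIO", "Pointcloud_Files",
                       "Simulation", "Vector_Map_Files"]

def missing_subfolders(experiment_path):
    """Return the list of required sub-folders missing from an experiment folder."""
    root = Path(experiment_path)
    return [name for name in REQUIRED_SUBFOLDERS if not (root / name).is_dir()]
```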

Create-an-S-Function.md

An S-function is a computer language description of a Simulink block written in MATLAB, C, C++, or Fortran. S-functions are dynamically linked subroutines that the MATLAB execution engine can automatically load and execute.

S-functions define how a block works during different parts of simulation, such as initialization, update, derivatives, outputs and termination. In every step of a simulation, a method is invoked by the simulation engine to fulfill a specific task.

Creating an S function

To create an S function, open a blank Simulink model and add the S-Function Builder block from the Library Browser.

The S-Function Builder integrates new or existing C or C++ code and creates a C MEX S-function from specifications you provide. Instances of the S-Function Builder block also serve as wrappers for generated S-functions in Simulink models. When simulating a model containing instances of an S-Function Builder block, Simulink software invokes the generated S-function in order to call your C or C++ code in the instance's mdlStart, mdlOutputs, mdlDerivatives, mdlUpdate and mdlTerminate methods.

Double click to open the S-Function Builder.

Specify a name for your S-Function that explains what it does, for example "sim_time_calc" for an S-function that calculates the wall time of a simulation. Note that when you name an S-function, all functions in the S-Function Builder editor change and the S-function name becomes a prefix to all wrapper functions.

The code is written in the S-Function Builder under the different tabs such as Libraries, Start, Outputs and Terminate and then Build generates the code that we want to run at different stages of simulation in the different functions.

Explaining the different parts of an S-Function using the example of an S-Function that calculates the wall time of a simulation

1. Libraries

Here you can enter any library/object or source files used by the S-Function. You can also specify any necessary include files or define global variables that will be used in the Start, Output and Terminate methods.

In our example, we specify the header files for printing on Windows and Linux as well as the global variables that will be used in the other wrapper functions.

2. Start

This tab can be used for one-time initialization and memory allocation.

In our example, we obtain the simulation start time using the gettimeofday() function. We also set the values of the variables that should be set at the start of the simulation such as the total duration, previous step duration, maximum and minimum step duration.

3. Outputs

Here you enter the C-code that does the computation or call your algorithm.

In our example, we obtain the simulation end time using gettimeofday() function and then calculate the simulation time by subtracting end time and start time. We also calculate the step duration between 2 iterations and also the maximum and minimum step duration.

4. Terminate

This section is used to perform operations required at the end of the simulation.

In our example, we print the different values that we have calculated in the Outputs tab.

5. Build

Once you write the code in the different tabs you can generate the code in the wrapper files by clicking on Build in the top right corner. Build generates the following files:

  1. s_function_name.c

This file consists of the different mdl functions such as mdlStart(), mdlOutputs() and mdlTerminate(). Inside these mdl functions the wrapper functions such as s_function_name_Start_wrapper(), s_function_name_Outputs_wrapper() and s_function_name_Terminate_wrapper() are called.

  2. s_function_name_wrapper.c

This file contains the code generated from the S-Function Builder. Hence, it also contains the function definitions of the s_function_name_Start_wrapper(), s_function_name_Outputs_wrapper() and s_function_name_Terminate_wrapper() functions.

  3. s_function_name.tlc

This file consists of the functions that will be used during code generation.

Adding an S-Function to your Experiment

To do this, you simply have to copy-paste the S-Function from the model that contains it into your model. When you run your simulation, the S-Function will run during the different simulation stages it has been called in.

Code generation

To generate code and run the S-Function on Linux, follow this link.

If you face any issues regarding Prescanrun, refer to this link


Create-a-point-cloud-map.md

PreScan GUI

The process of creating a point cloud map is done in Simulink on the Windows computer, without using ROS on Ubuntu. We will make the ego vehicle follow a predefined trajectory for the mapping, thus not needing the autonomous driving intelligence and ROS.

To create a point cloud map, open the respective world pex file from within PreScan (e.g. C:\Users\adeye\AD-EYE_Core\AD-EYE\Experiments\W06_Tiny_Town\Simulation\W06_Tiny_Town.pex), select the option Save experiment as and leave the options for name and location as they are. A folder will be created in the respective world folder (e.g. C:\Users\adeye\AD-EYE_Core\AD-EYE\Experiments\W06_Tiny_Town) with the same name (e.g. W06_Tiny_Town). Then remove the existing Mapping folder and rename the newly created folder (W06_Tiny_Town) to Mapping.

The mapping world must be identical to its simulation counterpart without the mobile actors (cars, trucks, humans...). Open it in PreScan and remove all the moving actors (this can be done by selecting them in the experiment components list under the actors category and pressing the delete key).

Under the actors look for the ego vehicle (Eg: In this case BMW_X5_SUV_1). In the sensor section, remove all the sensors but the Pointcloud ones.

The Pointcloud sensors must have their output modes set to WorldPosition. To do this right click on the PointCloud sensor and open the object configuration.

The following window should appear and the World position mode can be selected in the Basic tab:

Click ok and make sure that in the property editor on the right side the field Angles is set to False (see below).

The (ego) vehicle should follow a path along which it will save measurements as exemplified in the following picture:

The path should cover as much of the map as possible. Overlaps are not an issue but unnecessarily make the point cloud map bigger. The frequency of the simulation and the speed of the ego vehicle should be set so that there is roughly one measurement every 8 meters (the easiest way to do so is to set the speed profile to constant at 8 m/s and the frame rate of the sensors to 1 Hz).
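The spacing between measurements is simply the ego speed divided by the sensor frame rate, as this small sketch illustrates (function name hypothetical):

```python
def measurement_spacing(speed_m_s, frame_rate_hz):
    """Distance driven between two point cloud measurements, in meters."""
    return speed_m_s / frame_rate_hz

print(measurement_spacing(8.0, 1.0))  # 8.0 m between measurements
```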

Simulink

In Simulink, the output of the sensors has to be connected to the point cloud map creator block as seen in the next image. The block assumes that four sensors are being used to map. Make sure to set the end of the simulation in Simulink according to the time required to drive the path in PreScan.

The pcd files will be saved in the experiment directory.

Make sure that the pcd files do not contain only zeros. This issue can happen if the property Angles of the Pointcloud sensor is set to True in PreScan while the output mode is set to World Position. If that happens, change the output mode to Range and back to World Position through the object configuration window (not the property editor!).


Create-a-vector-map.md

Execution

Modify the variable PEX_FILE_LOCATION in the file main.py (use the Ubuntu system) in AD-EYE_Core/Pex_Data_Extraction/pex2csv to add the path to the pex file that should be transformed into a vector map. Set the variable OnlyVisualisation to False.

cd ~/AD-EYE_Core/Pex_Data_Extraction/pex2csv
python3 main.py

A window should pop up and show the generated map. Check that the map is coherent. If not, there might be some vector mapping rules that were not respected (see this page).

If everything is good, the vector maps files are in the folder defined by the variable VECTORMAP_FILES_FOLDER (by default, AD-EYE_Core/Pex_Data_Extraction/pex2csv/csv) and should be moved to /AD-EYE_Core/AD-EYE/Experiments/name_of_the_world/Vector_Map_Files.
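Moving the generated files can also be scripted; the helper below is a hypothetical sketch using the default folder names mentioned above:

```python
import shutil
from pathlib import Path

def move_vector_map_files(output_folder, experiment_folder):
    """Move the generated vector map csv files into the experiment's
    Vector_Map_Files folder."""
    destination = Path(experiment_folder) / "Vector_Map_Files"
    destination.mkdir(parents=True, exist_ok=True)
    for csv_file in Path(output_folder).glob("*.csv"):
        shutil.move(str(csv_file), str(destination / csv_file.name))
```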

Visualisation of an existing vector map

Modify the variable VECTORMAP_FILES_FOLDER in the file main.py in AD-EYE_Core/Pex_Data_Extraction/pex2csv to add the path to the folder where the vector map you want to visualize is stored. Set the variable OnlyVisualisation to True.

cd ~/AD-EYE_Core/Pex_Data_Extraction/pex2csv
python3 main.py

Documentation (optional)

Run the following commands to generate the documentation:

cd ~/AD-EYE_Core/Pex_Data_Extraction/doc
make html

The documentation is saved in ~/AD-EYE_Core/Pex_Data_Extraction/doc/_build.


Create-map-from-OpenStreetMap.md

Retrieve 3D shape of buildings

In order to get real world data, we use OpenStreetMap.

There are some tools that can extract 3D building data from OpenStreetMap. For example here, using https://cadmapper.com/pro/home

Make sure that the include 3D buildings option (bottom left) in the screenshot is selected.

Then, the model can be imported into Sketchup, and exported as dae.

We end up here with a .dae file.

Retrieve the road network definition

Here again, we use OpenStreetMap. The extraction can be done directly through the OpenStreetMap website.

There is an Export option on the top bar that allows you to select an area, then export it into a .osm file

We end up here with an .osm file.

Note : If possible, doing both extractions (building shape and road networks) at the same time with the same tool can be better and may avoid the placement step. (Not tested)

Create the Prescan Experiment

Files needed :

After previous steps, you should have the following

  • A .dae file :
    The .dae file contains the 3D shapes of the buildings. It has to be imported in Prescan as a "User Library Element". (Explained after)

  • At least one .osm file :
    You can use multiple .osm files if they come from the same area (e.g. if you want to add some missing roads afterwards).

Import the 3D building shapes as a UL Element

In a Prescan experiment, you can use "User Library Elements". They are created by importing a 3D model file (like the .dae one).
A UL Element is created only locally on the computer, so if the experiment is opened on another computer, it will have to be created again (with the same name and in the same folder. See after).

To create a UL Element, go to
Tools -> User Library Elements Wizard...
Then, add a new folder if you don't have any and click on New Element.

Then, you can leave everything as default, you just need to specify a name (on step 2) and the .dae file on step 3.

The new UL Element can be dragged and dropped like every other element in the Prescan Library.

Import the road network from OpenStreetMap

Importing .osm files is possible with Prescan (just with File -> Import -> OpenStreetMap... and select your file).

However, the roads which are produced will be full of errors. So, here, the painful part begins...

Tips

  • Prescan creates lots of flex roads but uses too many definition points; many of them can be removed.
  • Also, sometimes cross ends are too short and produce errors. This can be solved by increasing the length a little bit.
    (Right click on one end, next to the connection point, Edit road end..., then increase the Side road length value)
  • When you edit the road network be careful not to move big parts of the map (pay attention if you change the orientation of a road that you didn't disconnect first).
  • Sometimes, you cannot edit a flex road directly, you have to disconnect one end first.

Lots of modifications have to be done patiently, good luck!

Finally

When your road network is consistent and doesn't have errors anymore, it is time to put the building shapes in place. You have to try to find the best place and orientation for the model (build the map and watch the top view on the 3D viewer).
Then, you can adjust the roads so it fits perfectly (sometimes, previous modifications may have made the roads not accurate with the real path).


Creating-an-OpenSCENARIO-file-with-Variations.md

OpenSCENARIO is a standard format to describe dynamic content in driving situations.

A description of the format can be found here: ASAM OpenSCENARIO user guide.

Step by step description

  • Create or reuse an OpenSCENARIO file (.xosc) and change the names of the actors according to the actors in the Prescan map.

  • Change the name of the ego vehicle to 'Ego'.

  • Change the initial positions of the actors.

  • Change the weather conditions.

  • Change the movement functions (step, sinusoidal, linear) and/or conditions (Distance, AfterTermination,TimeHeadWay).

  • Add an array in the form [x,y,z] to the parameter you want to vary: x is the first value, y the step and z the last value.

  • Save the file in AD-EYE/OpenSCENARIO/OpenSCENARIO_experiments with the same name as the Prescan map.
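A variation array can be expanded into concrete values with a short script; the sketch below is hypothetical and uses the [first, step, last] ordering described above:

```python
def expand_variation(first, step, last):
    """Expand a variation array [first, step, last] into its list of values."""
    values = []
    v = first
    while v <= last + 1e-9:  # small tolerance for floating point steps
        values.append(round(v, 6))
        v += step
    return values

print(expand_variation(10, 5, 25))  # [10, 15, 20, 25]
```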

You can get additional information about how to change a scenario here : https://www.asam.net/index.php?eID=dumpFile&t=f&f=3496&token=df4fdaf41a8463e585495001cc3db3298b57d426#_scenario_creation

Values impacting the test automation parameters

Some parameters can be specified in the OpenScenario file but will only generate Excel tables that are configurations of the test automation (stored in AD-EYE/TA/Configurations). Refer to ExperimentA.xosc in AD-EYE/OpenSCENARIO/OpenSCENARIO_experiments for an example.

| Parameter | Representation |
|---|---|
| speed | {start,step,end} |
| rain intensity | {value;value,...} |
| reflectivity | {start,step,end} |

Creating-map-for-OpenSCENARIO.md

Create map in Prescan

  • Create the map in Prescan that you want to use with the actors you want. Add cameras to the actors to let them be visible in Simulink (see Wiki -> Execution-> Run a simulation ).

  • Save the Prescan map in AD-EYE/Experiments and rename the main folder 'Simulation'.

  • Open the simulink (.slx) file. Make sure that the cameras of the ego vehicle are named CameraSensor_1_CM_Demux and CameraSensor_2_CM_Demux.

  • Change the ROS Message length accordingly (see Wiki -> Execution-> Run a simulation). To do this ROS blocks need to be in the Simulink model and active. They can be commented out afterwards.

  • Change the maximum simulation time to your preference.

  • Toggle on 'Save results' in the DISPLAY: Color CameraSensor block in Simulink if you prefer to save the data.

  • Save the Simulink file.


Custom-ROS-messages-in-Matlab.md

Installing the Matlab add-on

Install the add-on ROS Toolbox interface for ROS Custom Messages in Matlab. That add-on enables the creation of custom ROS messages in Matlab.

Creating the message package

The package containing the messages needs to be created with the same folder name as the name present in package.xml. See adeye_msgs as an example.

The package.xml file must contain:

<buildtool_depend>catkin</buildtool_depend>
<build_depend>message_generation</build_depend>
<run_depend>message_runtime</run_depend>

And optionally, depending on the message dependencies:

<build_depend>std_msgs</build_depend>
<build_depend>geometry_msgs</build_depend>

Generating the Matlab representations of the messages

Use the command rosgenmsg('path_to_msg_package') where path_to_msg_package is the path to the folder one level above the message package (C:\Users\adeye\AD-EYE_Core\AD-EYE\ROS_Packages\src in our case).

Setting up Matlab to find the generated messages

Then add the generated folder to the Matlab path:

addpath('C:\Users\adeye\AD-EYE_Core\AD-EYE\ROS_Packages\src\matlab_gen\msggen')
savepath

Add the jar files to the java path:

cd(prefdir)
edit('javaclasspath.txt')

And add the path to the jar file representing the message (C:\Users\adeye\AD-EYE_Core\AD-EYE\ROS_Packages\src\matlab_gen\jar\adeye_msgs-0.0.1.jar for adeye_msgs)

Source: https://se.mathworks.com/matlabcentral/answers/634969-why-do-i-get-the-error-cannot-find-a-matlab-message-class-for-type-package-type-for-my-custom


Detection.md

Detection diagram

The white nodes are part of Autoware, while the blue ones are custom nodes that are included in the Adeye package.

Camera extrinsics - Camera frame

This frame describes the position and orientation of the camera. The following rules must be considered:

  • The origin of the frame should be the optical center of the camera
  • +x should point to the right in the image
  • +y should point down in the image
  • +z should point into the plane of the image

In the current implementation, a static transformation between base_link and the camera frame has been added to the detection launcher:

<node pkg="tf" type="static_transform_publisher" name="base_link_to_camera" args="2.0 0.0 1.32 -0.5 0.5 -0.5 0.5 base_link camera 10"/>


Example of the calculation using Matlab:

quat_frames = eul2quat([-pi()/2, pi()/2,0], "xyz");
quat_orientation = eul2quat([-pi()/12,-pi()/12,0],"zyx");
quat_final = quatmultiply(quat_orientation,quat_frames);

The first quaternion is the constant relation between the PreScan frame and the Autoware frame for the cameras. The second quaternion represents the rotation in PreScan. In this example, the camera is rotated -15 degrees around both the z and y axes.
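The frame calculation can also be reproduced in plain Python (quaternions written as (w, x, y, z)); this sketch only covers the first, constant quaternion from the Matlab snippet:

```python
import math

def quat_mult(q1, q2):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def quat_about_x(angle):
    return (math.cos(angle / 2), math.sin(angle / 2), 0.0, 0.0)

def quat_about_y(angle):
    return (math.cos(angle / 2), 0.0, math.sin(angle / 2), 0.0)

# Intrinsic rotation about x then y: q_total = q_x * q_y
q_frames = quat_mult(quat_about_x(-math.pi / 2), quat_about_y(math.pi / 2))
print(q_frames)  # approximately (0.5, -0.5, 0.5, -0.5)
```

The result matches the quaternion arguments -0.5 0.5 -0.5 0.5 (qx qy qz qw) passed to static_transform_publisher above.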

Camera intrinsics - CameraInfo message

Nodes description

/camera_info_publisher

Publish the camera intrinsics.

/vision_ssd_detect

Reads image data from cameras, and provides image-based object detection capabilities.

/lidar_euclidean_cluster_detect

Reads point cloud data from 3D laser scanners, and provides LiDAR-based object detection capabilities.

/range_vision_fusion

Combines the results from lidar_detector and image_tracker. The class information identified by image_detector is added to the clusters of point cloud detected by lidar_detector. (Check range_vision_fusion/README, source 2)

/imm_ukf_pda_track

Tracks the motion of objects detected and identified by the above packages.

/tracked_objects_adapter

Adapts the message containing the information about the tracked objects to the requirements of the open planner. Adds 1 to the id of every tracked object to be sure that it is not considered part of the own car, and transforms the coordinates to the global frame.

Source

[1]https://github.com/CPFL/Autoware/wiki/Overview

[2]https://github.com/CPFL/Autoware/blob/master/ros/src/computing/perception/detection/fusion_tools/packages/range_vision_fusion/README.md


Editing-the-Wiki.md

Formatting

Refer to these pages for information about formatting:

Do not hesitate to look at how the pages of the wiki were made.

Adding picture to the wiki

The wiki is a repository and can be cloned using git clone https://gits-15.sys.kth.se/AD-EYE/AD-EYE_Core.wiki.git.

The wiki must not be cloned inside the main repository!

Once cloned, a folder should be created inside AD-EYE_Core.wiki/Images with the same name as the page the pictures will be added to (check the name of the page in the address bar when on the page the picture will be added. For example, https://gits-15.sys.kth.se/AD-EYE/AD-EYE_Core/wiki/Autonomous-Driving-Intelligence-(ADI) leads to the folder Autonomous-Driving-Intelligence-(ADI))

Add your pictures in that folder, add the changes on git, commit and push. The link to the pictures can now be added in the page. Using html allows setting the size of the picture with the following code:

<p align="center">
</p>

Error_1_VcPP.md

Issue: seems to be related to conflicting python installations.

Workaround: open a command line.

  • Right way: remove the conflicting python installation from the PATH.
  • Easy way: run the following command: set PATH="", cd to your PreScan directory, then open prescanstart.exe from the same terminal.


Extract-GPS-data-from-car-data.md

In the downloads folder, there is a folder named gnss. Enter this folder.

In the Matlab code named extract_GPS_data, change the variable 'Path_to_file' and enter the path to the file pose.txt in the gnss folder of the database: Path_to_file='C:\Users\adeye\Desktop\rec1\data\AutoDrive\recording\2019_9_10\rec1\gnss\pose.txt';

Then run the Matlab code. The variable cart_data contains the x, y and z Cartesian coordinates of all points in the file. cart_data is a matrix: the first column is the x coordinate, the second the y coordinate and the third the z coordinate. Each line corresponds to the Cartesian coordinates of one point.
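For illustration, a common way to turn GPS latitude/longitude into local Cartesian coordinates is a flat-earth approximation around a reference point. The sketch below is a hypothetical example and not necessarily what extract_GPS_data does:

```python
import math

EARTH_RADIUS = 6378137.0  # WGS84 equatorial radius, in meters

def geodetic_to_local_xy(lat_deg, lon_deg, lat0_deg, lon0_deg):
    """Hypothetical flat-earth conversion of (lat, lon) to local (x, y) in
    meters, relative to a reference point (lat0, lon0)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    lat0, lon0 = math.radians(lat0_deg), math.radians(lon0_deg)
    x = EARTH_RADIUS * (lon - lon0) * math.cos(lat0)  # east
    y = EARTH_RADIUS * (lat - lat0)                   # north
    return x, y
```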


Extraction-of-3d-models-from-Cities-Skylines-game-and-incorporation-in-PreScan.md

Summary

Extracting 3D models

Directly from Cities: Skylines game

  • Open the game.

  • About ModTools: For all information and for installation you can visit here → https://steamcommunity.com/sharedfiles/filedetails/?id=450877484. It is an in-game tool that we use here to extract 3D models and textures.

  • Make sure ModTools is installed and enabled.

  • Then load the map, click on a building (or a vehicle) and click on Dump Assets.

The 3D object will be dumped in this directory. The object's format is Wavefront (.obj).

Notes:

  • Sometimes the texture's colors you see in-game won't be the same as the dumped one.
  • Sometimes the .obj is broken and can not be imported in Blender (even though the .obj can be opened in 3D viewer of Windows).

With Asset Studio

This solution works for any Unity game. You have to download Asset Studio: https://github.com/Perfare/AssetStudio/releases

  • Once downloaded, go to File → Load Folder → Go to this directory : C:\Program Files (x86)\Steam\steamapps\common\Cities_Skylines And then load the folder called Cities_Data.

  • Once the folder is loaded, go to the assets; you can visualize the 3D objects (in the “mesh” category). Choose one, right click on it and choose “Go to scene hierarchy“.

  • Now you can tick all the boxes related to this object; they have almost the same name (this also imports the textures, but often there will be only one to tick).

  • And then Model → Export selected objects (split).

The export format is .fbx and the textures are already applied so the 3D model is ready to be imported in PreScan. If it is not the case, you will have to put the textures manually.

Adding textures in Blender

If you extracted the 3D model from the game directly (or it didn't work from Asset Studio), you have to put the textures manually.

  • Go to File → Import → Select the format you want. With Modtools, the file will be a Wavefront (.obj). With AssetStudio, it will be FBX (.fbx).

  • You will often have the choice between 2 files; select the one that does not have “LOD” in its name.

  • Then, go to UV Editing menu, your screen will look like this:

  • On the left screen, click on “Open”, then go to the directory where your 3D object was; the textures are in the same directory. I recommend making a folder for each 3D model in order to not get lost.

  • Select the image (you will have a lot of images; it is generally the heaviest one, in this example 1.3 MiB)

  • Select the 3D model (click on it) and then press TAB (this step is for checking whether the mesh is applied correctly to the image)

  • Now you can go to the Shading menu.

  • On the bottom screen click on Add → Texture → Image Texture, it will create a new node that you will have to connect like this:

  • Then you have to click on the icon on the left of “New” to browse to the image you imported before.

Quicker way:

You can skip the UV editing; in our case the mesh will always be correctly applied to the texture if you exported with Modtools. You can go directly to the Shading menu and, instead of browsing the image, directly click on “Open” and browse to the image.

  • Once done, you can export to the COLLADA (.dae) format in order to import it into Prescan.

Importing on PreScan

With the Model Preparation Tool

  • Open the “Model Preparation Tool”.

  • Go to File → New.

  • Import the file (the file must be in COLLADA (.dae) format).

  • On the right, click on “Canvas Data”, you have to fill the editor data : Enter a category, the main tab, the subtab, the model name (very important, it is the name that will appear on PreScan) and a description.

  • Finally, you have to save your work here C:\Users\Public\Documents\Prescan\GenericModels or here C:\Users\adeye\AD-EYE_Core\Prescan_models (if the PreScan path was modified). It is recommended to use the following folder structure.

Note: Once the model is saved, PreScan has to be launched again to take the new models into account.

With User Library Elements Wizards

  • This is a more straightforward method to add 3D models.

  • Open PreScan → Tools → User Library Elements Wizards; a window will open. From there, folders can be added or removed. When a folder is selected, an element can be added to it.

  • Then, browse to the 3D model and click next at every step, unless some modifications are needed.


Fault-Injection.md

Sensor disturbances

The fault injection under the form of sensor disturbances takes place in the Simulink model. The blocks that have a light blue color perform this task. They have a On/Off input (or On/Random/Rain for the Lidars).

This input is connected to a ROS Listen Fault Injection block that allows controlling the disturbances from ROS. The inputs to the ROS Listen Fault Injection block are the default values that will be overridden if a message is received on the fault injection topics.

For Test Automation, to control disturbances, these default values need to be set by the Test Automation and no message should be sent on the fault injection topics.

Topics for disturbances control

For the following topics the message type is always Float64MultiArray (http://docs.ros.org/en/jade/api/std_msgs/html/msg/Float64MultiArray.html). The content of these arrays is detailed in the next section, Parameters arrays descriptions.

| Sensor | Fault Injection Topic |
|---|---|
| Lidar 1 | /fault_injection/lidar1 |
| Lidar 2 | /fault_injection/lidar2 |
| Lidar 3 | /fault_injection/lidar3 |
| Lidar 4 | /fault_injection/lidar4 |
| GNSS | /fault_injection/gnss |
| Camera 1 | /fault_injection/camera1 |
| Camera 2 | /fault_injection/camera2 |
| TL camera | /fault_injection/tl_camera |
| Radar | /fault_injection/radar |

Parameters arrays descriptions

Lidar

| Index | Parameter | Possible Values | Default Value | Description |
|---|---|---|---|---|
| 0 | State | 0: Off, 1: Random, 2: Rain Model | | Which disturbance model will be used: Random is Gaussian along the range and the polar angle, Rain Model follows specific equations. |
| 1 | random/range_variance | >0 | 0.0001 | Variance of Gaussian noise on the range field (only active if state is Random) |
| 2 | random/theta_variance | >0 | 0.0001 | Variance of Gaussian noise on the polar angle field (only active if state is Random) |
| 3 | rain/rain_intensity | >0 | 7.5 | Rain intensity in mm/h (only active if state is Rain Model) |
| 4 | rain/a | | 0.01 | a (only active if state is Rain Model) |
| 5 | rain/b | | 0.6 | b (only active if state is Rain Model) |
| 6 | rain/reflectivity | 0 < reflectivity < 1 | 0.9 | Objects reflectivity (only active if state is Rain Model) |
| 7 | rain/max_range | >0 | 100 | Maximum range of the Lidar sensor in nominal conditions (only active if state is Rain Model) |

GNSS

| Index | Parameter | Possible Values | Default Value | Description |
|---|---|---|---|---|
| 0 | State | 0: Off, 1: Gaussian Noise | | Which disturbance model will be used: Gaussian is along x, y and z |
| 1 | noise_variance | >0 | 1 | Variance of the Gaussian along x, y and z (only active if state is Gaussian) |

Radar

Index Parameter Possible Values Default Value Description
0 State 0: Off, 1: Gaussian Noise Which disturbance model will be used: Gaussian is along the three spherical coordinates
2 range_variance >0 1 Variance of the Gaussian along the range field (only active if state is Gaussian)
3 theta_variance >0 1 Variance of the Gaussian along the polar angle (only active if state is Gaussian)
4 phi_variance >0 1 Variance of the Gaussian along the azimuth angle (only active if state is Gaussian)
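
Both the GNSS and radar models boil down to adding zero-mean Gaussian noise with a given variance to each coordinate (Cartesian x, y, z for GNSS; spherical range, theta, phi for radar). A minimal sketch (the function name is illustrative, not the actual implementation):

```python
import random

def gaussian_disturbance(coords, variance, rng=random):
    """Add zero-mean Gaussian noise with the given variance to each
    coordinate, as the GNSS model does along x, y and z."""
    sigma = variance ** 0.5  # random.gauss takes a standard deviation
    return tuple(c + rng.gauss(0.0, sigma) for c in coords)

rng = random.Random(0)  # seeded for repeatability
noisy = gaussian_disturbance((10.0, 5.0, 0.0), variance=1.0, rng=rng)
```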

Camera

Index Parameter Possible Values Default Value Description
0 State 0: Off, 1: Colored Patch Which disturbance model will be used: Colored patch replaces all pixels in a specified rectangle by the specified color
1 patch/start_x integer, less than image x dimension 200 Start of the rectangle along x (only active if state is Colored Patch)
2 patch/start_y integer, less than image y dimension 100 Start of the rectangle along y (only active if state is Colored Patch)
3 patch/size_x integer, less than image x dimension - start_x 100 Size of the rectangle along x, currently inactive (only active if state is Colored Patch)
4 patch/size_y integer, less than image y dimension - start_y 150 Size of the rectangle along y, currently inactive (only active if state is Colored Patch)
5 patch/R integer, <255 0 Color of the rectangle, Red channel (only active if state is Colored Patch)
6 patch/G integer, <255 0 Color of the rectangle, Green channel (only active if state is Colored Patch)
7 patch/B integer, <255 0 Color of the rectangle, Blue channel (only active if state is Colored Patch)
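
The colored-patch model can be sketched as follows (a simplified stand-in, not the actual implementation; the image is a plain nested list here):

```python
def apply_colored_patch(image, start_x, start_y, size_x, size_y, rgb):
    """Replace all pixels in the given rectangle by a solid color.
    `image` is a list of rows of (R, G, B) tuples, indexed image[y][x]."""
    patched = [row[:] for row in image]
    for y in range(start_y, min(start_y + size_y, len(image))):
        for x in range(start_x, min(start_x + size_x, len(image[0]))):
            patched[y][x] = rgb
    return patched

BLACK = (0, 0, 0)
WHITE = (255, 255, 255)
img = [[WHITE] * 4 for _ in range(3)]          # 4x3 all-white image
out = apply_colored_patch(img, 1, 0, 2, 2, BLACK)
```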

Killing nodes


Fault-monitors.md

Purpose

The purpose of the fault monitors is to detect and monitor faults. A common interface has been implemented so that, regardless of the fault being monitored, the usage is the same.

Implementation

To avoid a flickering output, the fault monitor follows the logic of a counter that:

  • is incremented when a test fails
  • is decremented when a test succeeds

When the counter reaches a high threshold, what is monitored is considered faulty. When the counter reaches a low threshold, what is monitored is considered non-faulty. Between the high and low thresholds, the state remains the same as in the previous iteration.
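
A minimal sketch of that hysteresis logic (class name and thresholds are illustrative, not the actual implementation):

```python
class FaultCounter:
    """Counter with hysteresis: it moves up on failed tests, down on
    passed tests, and the faulty state only flips at the thresholds."""

    def __init__(self, low=0, high=5):
        self.low, self.high = low, high
        self.count = low
        self.faulty = False

    def update(self, test_passed):
        if test_passed:
            self.count = max(self.low, self.count - 1)
        else:
            self.count = min(self.high, self.count + 1)
        if self.count >= self.high:
            self.faulty = True
        elif self.count <= self.low:
            self.faulty = False
        # between the thresholds, the previous state is kept
        return self.faulty

monitor = FaultCounter(low=0, high=3)
for ok in [False, False, False]:   # three consecutive failures
    monitor.update(ok)
assert monitor.faulty              # reached the high threshold
monitor.update(True)               # one success is not enough...
assert monitor.faulty              # ...state is kept between thresholds
```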

Class Diagram


Files-Walkthrough.md

File by File Walkthrough of the VectorMapper

Utils.py

Circle_from_p1p2r(p1,p2,r)

Gives the position of the center of the circle going through P1 and P2 with a radius equal to r.

Radius_of_circle(p1,p2,angle)

Gives the radius of a circle that goes through P1 and P2, with the angle between P1-Center and P2-Center being equal to the given angle.

Dist(p1,p2)

Returns the distance between P1 and P2.

Intersection_Lines(L1, L2)

Finds the point of intersection between two lines, each represented by a tab of (x,y) points (L1 and L2). If the lines are parallel, returns a middle point.

Intersection_Circle(C1, C2)

Finds the point(s) of intersection between two circles, each represented by a tab containing the coordinates of the center of the circle and its radius.
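
To make the geometry concrete, here is a sketch of two of these helpers, assuming `angle` is the central angle P1-Center-P2, so that the chord length is 2 * r * sin(angle / 2) (these are illustrative re-implementations, not the actual Utils.py code):

```python
import math

def dist(p1, p2):
    """Distance between two (x, y) points."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

def radius_of_circle(p1, p2, angle):
    """Radius of the circle through p1 and p2 given the central angle
    P1-Center-P2, using chord = 2 * r * sin(angle / 2)."""
    return dist(p1, p2) / (2.0 * math.sin(angle / 2.0))

# When the central angle is pi, the chord is a diameter:
r = radius_of_circle((0.0, 0.0), (2.0, 0.0), math.pi)  # radius 1.0
```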

Path.py

Defines useful geometry used to build the different available road types. Basically, it defines the building blocks (Bend, Curve, and Straight) that will be used to define every RT (Roundabout, Straight Road and so on). To do so, the first class defined is an iterable class which defines the actual path (with points!) of the objects that will be defined later.

  • Path

Defines the actual path object, which is iterable.

-Init

-Next

These two define the iterable class.

-Getstart/Getend

Provide the start/end point of the path (the eval function used in these functions is defined for each path type; see the next three classes).

  • Bend(Path)

Defines a path which is an arc of a circle.

-Init

x0, y0: Starting Coordinate of the Bend Path

A0: Global heading at the starting point

da: Heading at the endpoint relative to the curve heading

r: distance of the curve from the center of the circle used to represent the curve

Path.init(self, 1/r, np.ad(da)) => init the path with the right values

-Eval(t)

Returns the coordinates of the point at t.
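
One standard parameterization of such an arc (a sketch, not the actual Eval implementation) evaluates the point reached after arc length s on a left-turning circle of radius r, starting at (x0, y0) with global heading h:

```python
import math

def bend_eval(x0, y0, h, r, s):
    """Point at arc length s along a circular bend starting at (x0, y0)
    with heading h (radians) and curvature 1/r (left turn)."""
    return (x0 + r * (math.sin(h + s / r) - math.sin(h)),
            y0 - r * (math.cos(h + s / r) - math.cos(h)))

# Quarter circle of radius 10 starting at the origin, heading along +x:
x, y = bend_eval(0.0, 0.0, 0.0, 10.0, math.pi / 2 * 10.0)
# ends at (10, 10) after the heading has turned 90 degrees to the left
```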

  • Curve(Path)

Defines a path which is represented by a Bezier curve.

-Init

xs, ys: Tab of 4 points that define the Bezier curve

Offset: allows offsetting the curve to your liking (similar to the b in ax+b)

c: Points defining the curve itself

Path.init(self, 1/len( c ), 1) => init the path with the right values

-Dpdt(t)

Calculates the next iteration.

-eval(t)

Returns the coordinates of the point at t.
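
For reference, a cubic Bezier curve over four control points can be evaluated like this (an illustrative sketch, not the actual implementation):

```python
def bezier_eval(xs, ys, t):
    """Evaluate the cubic Bezier curve defined by four control points
    (xs[i], ys[i]) at parameter t in [0, 1] using Bernstein weights."""
    b0 = (1 - t) ** 3
    b1 = 3 * t * (1 - t) ** 2
    b2 = 3 * t ** 2 * (1 - t)
    b3 = t ** 3
    x = b0 * xs[0] + b1 * xs[1] + b2 * xs[2] + b3 * xs[3]
    y = b0 * ys[0] + b1 * ys[1] + b2 * ys[2] + b3 * ys[3]
    return x, y

xs, ys = [0.0, 1.0, 2.0, 3.0], [0.0, 2.0, 2.0, 0.0]
start = bezier_eval(xs, ys, 0.0)   # the first control point
end = bezier_eval(xs, ys, 1.0)     # the last control point
mid = bezier_eval(xs, ys, 0.5)
```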

  • Straight(Path)

Defines a path which is represented by a straight line.

-Init

x0, y0: Starting coordinates of the Straight path

h: Global Heading of the road

Path.init(self, 1, 0),l => init the path with the right values

-eval(t)

Returns the coordinates of the point at t.

Road.py

Defines all the road types (RT) that make up the simulation (Straight Road, Bend Road, X Crossing and so on) using the geometry defined in path.py. Here is how this module shapes up: first we define a class named Road, because every RT has some things in common (such as a centerline, lanes, speed limits, edges and so on). Then, using this class, we build each RT as a class that uses the Road class as an object and defines the rest of the things that are unique to the specific RT (for instance, for Roundabout, you’ll see the mention of exit lanes).

  • Road

So just like the Path class, this class defines a road object that will be used by each RT class.

-Init

ID: String that references the RT (these IDs are defined by Prescan, cf. List of IDs)

c: Tab of point OR Path Object that defines the centerline of the Road

e1/e2: Tab of point OR Path Object that defines the edges of the Road

l: Tab of lanes (so it’s basically a tab of tab of points representing the lanes of the roads)

SpeedLimit/RefSpeed: Define the speed limit of the road and the speed at which you should go (ref speed)

-Getstart/getend

Gives the center point of the beginning/end of the road.

  • BendRoad(Road)

This class defines the RT Bend Road, which is a turn that can be represented by an arc of a circle.

-Init

ID: Gives the ID of BendRoad to feed to the init method of the Road class

x0, y0: Starting point (STP) of the road (center of the road)

l: Tab of lanes (so it’s basically a tab of tab of points representing the lanes of the roads)

h: global heading of the road at the STP

rh: Relative heading of the end of the road

lw: lane width

nb_of_lanes: nb of lanes in total

nb_of_lanes_in_x_dir : nb of lanes going in the x-direction

Init fills up e1/e2, c and l of the Road object with the Bend class defined in path.py.

  • CurvedRoad(Road)

This class defines the RT Curved Road, which is a road defined by a Bezier curve.

-Init

ID: Gives the ID of CurvedRoad to feed to the init method of the Road class

x0, y0: Starting point (STP) of the road (center of the road)

l: Tab of lanes (so it’s basically a tab of tab of points representing the lanes of the roads)

h: global heading of the road at the STP

rh: Relative heading of the end of the road

lw: lane width

nb_of_lanes: nb of lanes in total

nb_of_lanes_in_x_dir : nb of lanes going in the x-direction

cp1: Distance between the first control point (P1) and the STP, with an angle of h between those two points

cp2: Distance between the second control point (P2) and the STP, with an angle of rh between those two points

dx/dy: offset between the endpoint and the STP

  • StraightRoad(Road)

This class defines the RT StraightRoad which is a ... Straight road!

-Init

ID: Gives the ID of StraightRoad to feed to the init method of the Road class

x0,y0: Starting point (STP) of the road (center of the road)

l: Tab of lanes (so it’s basically a tab of tab of points representing the lanes of the roads)

h: global heading of the road at the STP

lw: lane width

nb_of_lanes: nb of lanes in total

nb_of_lanes_in_x_dir : nb of lanes going in the x-direction

Init fills up e1/e2, c and l of the Road object with the Straight class defined in path.py.

  • AdapterRoad(Road)

This class defines the RT Adapter Road, which is a road used to add/remove a lane.

-Init

ID: Gives the ID of AdapterRoad to feed to the init method of the Road class

x0,y0: Starting point (STP) of the road (center of the road)

l: Tab of lanes (so it’s basically a tab of tab of points representing the lanes of the roads)

h: global heading of the road at the STP

lw: lane width

nb_of_lanes: nb of lanes in total

nb_of_lanes_in_x_dir: nb of lanes going in the x-direction

Init fills up e1/e2, c and l of the Road object with the path classes defined in path.py.

Parse.py

The Parse module takes everything useful that defines the road network created in Prescan and fills two lists: a list called Road, which contains everything related to the roads that compose the road network, and a list called StaticalObject, which (at the time of writing) contains only information related to traffic lights. These two lists are filled using two functions called get_staticalobject and get_roads.

Let's first study get_roads. get_roads calls other functions get_X, with X being a road type: these get_X functions are the ones that fetch the relevant information about the road being added to the road list. Most road types share common parameters, such as the general heading of the road, the number of lanes, the speed limit and so on, so for most road types get_X has no notable feature. But for some, such as Roundabout, X and Y Crossing, and Straight Road, some features should be discussed further.

Roundabout

For Roundabout, in addition to all the basic things you'll find for every road type, you have the heading and number of lanes for each cross-section. Moreover, you'll find that the parser also outputs a list called TabPointCon, which basically gives the middle point of each cross-section's beginning.

X/Y Crossing

Much like Roundabout, you'll find that the get_X/Ycrossing functions return the heading and the number of lanes per cross-section. But you'll also find that the length of each cross-section is output as well. This leads to the final difference: stop lines for each cross-section are directly calculated here, in a way that supports multi-lane stop lines. This means that if your X Crossing, for instance, has 3 lanes that are concerned by the stop line, only one stop line will be created, and it will span the 3 lanes. So for X/Y Crossing, you don't have the usual one lane = one stop line.

Straight Road

The only main difference with Straight Road is that multi-lane support for traffic lights was attempted. You'll see that in the stop line tab, the last int of the tab is not 1 but the number of lanes going opposite to the x-direction.

Preproc.py

The Preproc module has 2 main classes: the RoadProcessor and the StaticalObjectProcessor.

Those two classes work in the same way, so we'll only have a detailed view of the RoadProcessor class.

As you can see, the RoadProcessor class has 5 lists as variables: Road, Lane, Stopline, Edges, and Centers.

Edges, Centers, and Lane will be filled with Lane objects, a class (with a confusing name) used to describe edges, centers and lanes as lists of points plus some other information (cf. image).

Roads is a list filled with Road objects defined using the Road.py module: this list basically contains the road network we defined while creating the simulated world in Prescan.

Stopline is a list of 3 points (start, end and middle point of the stop line) and an integer giving the number of lanes on the road on which the stop line is (useful for traffic lights).
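
A stop-line record of that shape could be built like this (hypothetical helper, not the actual code):

```python
def make_stopline(start, end, nb_lanes):
    """Build a stop-line record: start point, end point, computed
    middle point, and the number of lanes it spans."""
    middle = ((start[0] + end[0]) / 2.0, (start[1] + end[1]) / 2.0)
    return [start, end, middle, nb_lanes]

stopline = make_stopline((0.0, 0.0), (7.0, 0.0), 2)  # spans two lanes
```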

The road list is an input given by the Parse module. All the other lists are filled by the __create_lane function, which calls every __create_RoadType function; each of those uses two functions: __get_RoadType and __add_RoadType.

Vmap.py


Frames-Description.md

Full frame tree (with NDT matching)

Nominal channel frames

Frame Description
world Global frame of reference
map Global frame used for the vector map and the point cloud map.
gps GNSS sensor frame.
base_link Car's frame positioned at the center point between the two back wheels.
radar Radar sensor frame.
velodyne Lidar sensor frame.
camera_1 Camera 1 sensor frame.
camera_2 Camera 2 sensor frame.
tl_camera Traffic light camera sensor frame.

With NDT matching

The ndt_matching node publishes the transform from map to base_link. This node is started in my_localization.launch.

See the picture above for the full frame tree with NDT matching.

With Fake Localization

The fake_localizer node publishes the transform from map to base_link under the name ground_truth_localizer. This node is started in my_fake_localization.launch.

The following picture shows the full frame tree using fake localization.

Safety channel frames

The safety channel defines two frames described in the table below.

Frame Description
SSMP_map Global frame used for safety channel grid map.
SSMP_base_link Car frame according to the safety channel.

Function-Walkthrough.md

  • OpenScenarioMod.m: Detects [x,y,z] objects (x is the first value of the array, z the last, y the step value) and creates multiple .xosc files based on this array. Inputs: name_experiment (name of the original .xosc file). Outputs: listOfNames (array with the names of the created .xosc files).

  • API_main.m: Main function which calls the other functions to create the OpenSCENARIO dynamics. Inputs: name_ego (the name of the ego vehicle in Prescan), name_experiment (name of the Prescan experiment) and name_experiment_template (name of the .xosc template file). Outputs: -.

  • xml2struct.m: Function which creates a structure from an XML type file. Inputs: XML file. Outputs: structure array.

  • struct2xml.m: Function which creates an XML type file from a structure array. Inputs: structure array. Outputs: XML file.

  • slblocks.m: Function to make a library visible in the Simulink Library Browser. Inputs: -. Outputs: -.

  • delete_files.m: Function which deletes old files which do not have the OpenSCENARIO changes in them. Inputs: folder_name (the name of newly created folder), name_experiment (name of the Prescan experiment). Outputs: -.

  • initialize_actors.m: Function which calls parameter_sweep_initalPositions.m based on the object type. Inputs: models (structure made from .pb file of the Prescan experiment), Struct_OpenSCENARIO (structure made from .xosc file), Struct_pex (structure made from .pex file of the Prescan experiment). Outputs: Struct_OpenSCENARIO, Struct_pex , models.

  • parameter_sweep_initalPositions.m: Function which changes the initial positions of the actors in the .pex file. Inputs: Struct_OpenSCENARIO (structure made from .xosc file), Struct_pex (structure made from .pex file of the Prescan experiment), k (value dependent on the object type in Prescan), i (for loop index). Outputs: Struct_pex (structure made from .pex file of the Prescan experiment).

  • parameter_sweep_vehicle.m: Function which changes vehicle parameters (not used).

  • parameter_sweep_pedestrian.m: Function which changes pedestrian parameters (not used).

  • parameter_sweep_bicycle.m: Function which changes bicycle parameters (not used).

  • weather_conditions.m: Function which changes the weather conditions of the Prescan experiment. Inputs: models (structure made from .pb file of the Prescan experiment), Struct_OpenSCENARIO (structure made from .xosc file), Struct_pex (structure made from .pex file of the Prescan experiment). Outputs: Struct_pex, models.

  • trajectory_declaring.m: Function which creates a variable which contains all the trajectory information. Inputs: models (structure made from .pb file of the Prescan experiment), Struct_OpenSCENARIO (structure made from .xosc file). Outputs: trajectory_variable (variable containing trajectory information).

  • initial_velocity_declaring.m: Function which creates a variable which contains all the initial velocity information. Inputs: models (structure made from .pb file of the Prescan experiment), Struct_OpenSCENARIO (structure made from .xosc file). Outputs: Velocity_variable (variable containing initial velocity information).

  • trajectory_counter.m: Function which creates two variables which contain the number of lateral and longitudinal trajectories. Inputs: models (structure made from .pb file of the Prescan experiment), Struct_OpenSCENARIO (structure made from .xosc file), trajectory_variable (variable containing trajectory information). Outputs: Lateral_events (number of lateral trajectories), Longitudinal_event (number of longitudinal trajectories).

  • simulink_ego.m: Function which adds the main ROS blocks to the ego vehicle in Simulink from the ROS_lib library; it also adjusts the constant R (rainfall rate, see Prescan manual p. 497). Inputs: name_simulink (the name of the Simulink file of the Prescan experiment), models (structure made from .pb file of the Prescan experiment), name_ego (the name of the ego vehicle in Prescan), Struct_pex (structure made from .pex file of the Prescan experiment). Outputs: -.

  • trajectory_labels.m: Function which adds labels to SELF_Demux in Simulink per declared actor in the .xosc file. Inputs: Velocity_variable (variable containing initial velocity information), models (structure made from .pb file of the Prescan experiment), name_simulink (the name of the Simulink file of the Prescan experiment). Outputs: -.

  • initial_velocity_dynamics.m: Function which adds a constant velocity block in Dynamics_Empty in Simulink per declared actor in the .xosc file. Inputs: name_simulink (the name of the Simulink file of the Prescan experiment), models (structure made from .pb file of the Prescan experiment), Struct_OpenSCENARIO (structure made from .xosc file), Velocity_variable (variable containing initial velocity information). Outputs: -.

  • trajectory_dynamics.m: Function which adds trajectories in Dynamics_Empty in Simulink per declared actor in the .xosc file. Inputs: name_simulink (the name of the Simulink file of the Prescan experiment), models (structure made from .pb file of the Prescan experiment), Struct_OpenSCENARIO (structure made from .xosc file), trajectory_variable (variable containing trajectory information), Lateral_events (number of lateral trajectories), Longitudinal_event (number of longitudinal trajectories), name_ego (the name of the ego vehicle in Prescan). Outputs: -.


General-Description-VectorMapper.md



VectorMapper

Welcome to the VectorMapper wiki! You'll find here plenty of (hopefully) useful information about the vector mapping code as well as the Vector Map format itself. Note: at the time of writing, there is not a lot of public documentation available on the Vector Map format. You'll have access here to every bit of knowledge the AD-EYE team discovered on its own (in a grueling process, most of the time), but there are still some parts of the Vector Map format that are unknown. If you happen to have knowledge about the VM format that is not in this wiki, feel free to complete it!

The Format

The VectorMap format defines the road network by breaking down lanes and road edges into Points, which are basically x,y,z coordinates. In our case, those points are never more than 1 meter apart. The VectorMap format links a Node to each Point. Those Nodes are then used to define what a Lane is: a Lane in the VectorMap format is mainly defined by two Nodes, the starting Node and the ending Node of the Lane. You'll also find the IDs of the Lanes located before and after the Lane being defined, as well as a Junction variable, an integer giving you information on the junction (Normal = 0, Left Branching = 1, Right Branching = 2, Left Merging = 3, Right Merging = 4, Composition = 5), and a Span variable, which tells you the length of the Lane. Every Lane in the Vmap format is also linked to a DTLane, which gives even more information on the Lane: the direction, and the total distance traveled by the car up until this Lane.

But you can define much more than Lanes using the VMap format: road edges, for instance, are defined using Lines. Lines are basically two points, the starting point and the ending point of the Line, and just like Lanes, you'll also find more information in their definition, such as the Lines before and after the Line of study.

Lines are also used to define Stop Lines, which are linked to a Lane (via a variable called LinkID, which is the ID of the Lane linked to the Stop Line) and can be linked to a Traffic Light with the variable called TLID. If linked to a Traffic Light, TLID will be equal to one of the three IDs representing a traffic light (cf. below); in the case of a standalone stop line, TLID will be equal to 0.

Finally, the Traffic Light is described using something called a Vector: 3 points defining the coordinates of the 3 lights (red, orange and green). And just like the stop line, a Traffic Light needs to be linked to a Lane (which will always be the same Lane as its own stop line's).

To better understand the VectorMap format, you'll find here an example showcasing our previous explanation more visually.
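
As a rough illustration of the fields discussed above, a simplified Lane record could look like this (a hypothetical, reduced sketch using only the fields mentioned in this page; the real Vector Map lane.csv has more columns):

```python
def lane_record(lane_id, back_lane, forward_lane, back_node, forward_node,
                junction, span):
    """Build a simplified Lane record with the fields described above."""
    return {
        "LnID": lane_id,        # this Lane's ID
        "BLID": back_lane,      # ID of the Lane located before this one
        "FLID": forward_lane,   # ID of the Lane located after this one
        "BNID": back_node,      # starting Node of the Lane
        "FNID": forward_node,   # ending Node of the Lane
        "JCT": junction,        # 0 Normal, 1/2 Left/Right Branching,
                                # 3/4 Left/Right Merging, 5 Composition
        "Span": span,           # length of the Lane in meters
    }

# Lane 2 sits between lanes 1 and 3, joining nodes 2 and 3, no junction:
lane = lane_record(2, 1, 3, 2, 3, 0, 1.0)
```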

The Code

To generate the CSV file refer to https://gits-15.sys.kth.se/AD-EYE/AD-EYE_Core/wiki/Create-a-vector-map

The first module of the Vector Mapper is the Parse module. Its job is to fetch the relevant information in the PEX file that defines the road network (and the entire simulated world) created in Prescan. Using the Road and StaticalObject modules (which define each road type and static object found in Prescan, based on the geometry and mathematical functions defined in the Path and Utils modules), the Parse module outputs a Road list containing every road of the simulated world's road network, with the relevant geometrical parameters.

That list is then given to the PreProc module, which breaks the roads into lanes, edges and so on. These are then fed to the last module, the Vmap module, which actually "translates" those lanes, edges and center lines into the VectorMap format and exports all of that information to the CSV files.

Reliance on Prescan

The vector mapper heavily relies on the way Prescan structures its pex file. Prescan updates are likely to modify the way elements of a Prescan world are defined in the pex file. As a result, the vector mapper can stop functioning properly after a Prescan update, which is something to keep in mind while debugging it.

If that happens, the modifications to fix the vector mapper are to be done in parse.py since it is the module reading the pex file.


Git-Commands.md

Command rules

Never use:

  • git add *
  • the force, -f, option with push

Avoid using:

  • git add -a
  • git add .

Always make sure you know what you are staging (git add) and committing (git commit). In case of doubt, use git status and, if necessary, go back to the git tutorial.

Try to avoid merges. If one is required, a pull has probably been forgotten.

Common git commands/cheatsheet

The most common git commands are represented in the next diagram:

Some other commands can be found here

A tutorial about git can be found Here


Git-LFS.md

Git LFS is a Git extension that allows storing big files in a separate repository in a seamless manner. In the main repository, those big files are replaced by pointers to the actual files.

Git LFS keeps track of the files mentioned in the .gitattributes file.

Read more about Git LFS

Setting up and using Git LFS

For Ubuntu & Windows:

  1. Download and install the Git LFS command line extension from here
  2. Use git lfs install in your directory

Continue following steps if using Windows:

  1. If any file needs to be added use git lfs track "W10_KTH_overhaul.zip" with the file that LFS will manage
  2. git add .gitattributes
  3. Now add, commit and push normally

Setting up the git hooks for automatic unzipping of the W10 models archive

The Git version must be 2.9 or higher.

git config core.hooksPath .githooks


Git-statistics.md

Using Hercules

Hercules (https://github.com/src-d/hercules#installation) is a tool that extracts statistics about a git repository; the results can be plotted using the labours tool.

Analysing and plotting directly
./hercules --granularity=1 --burndown --languages="python" /home/adeye/adeye_temp/AD-EYE_Core |  labours -m burndown-project

The language can be replaced by c++ or by matlab.

Analysing and saving the results
./hercules --granularity=1 --burndown --languages="python" --pb /home/adeye/adeye_temp/AD-EYE_Core >  analysis_results.pb
Combining saved results and saving the combination as pb
./hercules combine results1.pb results2.pb results3.pb > results123.pb
Combining saved results and plotting the combination
./hercules combine results1.pb results2.pb results3.pb | labours -m burndown-project
Plotting results saved in pb format
./hercules combine results.pb | labours -m burndown-project
Skipping certain folders/files
./hercules --burndown --first-parent --pb --skip-blacklist --blacklisted-prefixes="prefix to skip" /repo_folder | labours -f pb -m burndown-project
Plotting with better time resolution (must have the analysis results command piped)
labours -m burndown-project --resample=month
labours -m burndown-project --resample=raw #shows commit granularity

Cleaning the repositories from noise

Some commits added a lot of lines of code that were not written but duplicated, or added external projects. To remove this noise, the history must be rewritten to a clean state.

Note: if the files to be removed still exist in HEAD, they need to be removed in a new commit before the history can be rewritten.

Do not push the cleaned repository

Removing files and folders (removes based on name only, regardless of path)

BFG is a tool that rewrites history to remove files or folders based on name.

java -jar bfg-1.14.0.jar --delete-files file_name
java -jar bfg-1.14.0.jar --delete-folders folder_name

Removing specific files (specifying the path)

The filter-branch command allows removing specific files or folders from history.

git filter-branch -f --tree-filter 'rm -f path_to_file_or_folder' HEAD

AD-EYE_Core

Folders to remove:
  • mjpeg_server
  • web_video_server
  • robot_gui_bridge
  • GUI_server
  • experiments
  • Data
  • Prescan_models
Files to remove:
  • SSMPset_2018-1-3--11-58-48.csv
  • KTH_3D_KTH3d_20191008.org.dae
  • TemplatePexFile.pex

Pex_Data_Extraction:

Folders to remove:
Files to remove (these files were duplicated from the pex2csv folder; command: git filter-branch -f --tree-filter 'rm -f preproc.py' HEAD):
  • main.py
  • path.py
  • parse.py
  • preproc.py
  • road.py
  • staticalobject.py
  • utils.py
  • vmap.py

Finding what should be removed

Plotting using labours with the resampling option can help determine when the noisy commits happened.

The following script finds the biggest blobs in the history (source).

git rev-list --objects --all |
  git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' |
  sed -n 's/^blob //p' |
  sort --numeric-sort --key=2 |
  cut -c 1-12,41- |
  $(command -v gnumfmt || echo numfmt) --field=2 --to=iec-i --suffix=B --padding=7 --round=nearest

Using gitinspector

Install with sudo apt-get install gitinspector.

gitinspector -l -r -m -T -f=",js,c,cpp,h,hpp,py,m" --format=htmlembedded > gitinspector_page.htm

Using gitstats

Install with sudo apt-get install gitstats.

gitstats git_directory output_directory

Glossary.md

I-PDU

Interaction Layer Protocol Data Unit. Collection of messages for transfer between nodes in a network. At the sending node the Interaction Layer (IL) is responsible for packing messages into an I-PDU and then sending it to the Data Link Layer (DLL) for transmission. At the receiving node the DLL passes each I-PDU to the IL which then unpacks the messages sending their contents to the application.
Source: https://automotive.wiki/index.php/I-PDU


Home.md

AD-EYE

Welcome to the AD-EYE project documentation repository.

AD-EYE is a platform for Automated Driving that started in the Mechatronics Unit at KTH, primarily for the purposes of functional safety.

The main building blocks behind the software are:

  1. ROS (middleware)
  2. Prescan (car simulator)
  3. Autoware (nominal channel of the autonomous driving stack)

This repository is a collaborative effort from multiple people who worked in the project at KTH.

NOTE: Not all information in this wiki is up to date. If you find any misspelling, broken links or new ways/methods to solve a problem, please raise an issue or pull request for further discussion and/or analysis. Make sure to follow the provided guidelines [ADD WIKI GUIDELINES HERE].

Useful Links

(Main) Contacts


Platform Capabilities


Directory

Information

Getting Started

One-computer Set-up Guides

AD-EYE Installation Set-up Guides

Windows:

Ubuntu:

AD-EYE Software

Software Engineering Guides

Git

Vector Mapping

OpenSCENARIO

NVIDIA Drive PX2 Guides

  • NVIDIA Drive PX2 (DPX2): This guide contains specifications of the NVIDIA Drive PX2 HW.
  • Reset DPX2 to factory settings: List of steps to reset DPX2 to factory settings.
  • Install Autoware DPX2: List of steps to install and set up Autoware.ai on the Drive PX2. NOTE: According to the guide, this has not been tested since 2019.
  • Install AD-EYE on DPX2: List of steps to install and set up AD-EYE (w/ Autoware) on the drive PX2. It also contains encountered bugs and possible workarounds.

FAQ

Other Pages

Creating from real world data

Working with City skylines

Other


How-to-use-Git-on-Windows.md

GIT

To understand the basic workflow of GIT and GITHUB, you can follow this link.

https://product.hubspot.com/blog/git-and-github-tutorial-for-beginners

GIT on Windows

If you haven’t installed GIT already you can do it using this link.

https://git-scm.com/downloads

(Make sure you work with the latest version)

Once Git has been installed locally, you need to set it up before you can start creating repositories. To do so, first open Git Bash (from the Start button).

To Clone a Repository from GitHub:

Go to your repository on GitHub. In the top right above the list of files, open the Clone or download drop-down menu. Copy the URL for cloning over HTTPS.

Then go to the Git Bash and enter the following command in the terminal – git clone repository_url

The working directory should now have a copy of the repository from GitHub. It should contain a directory with the name of the repository, AD-EYE_Core. Change into it with cd AD-EYE_Core.

Configure GitHub Identity

Configure your local Git installation to use your GitHub mail and name by entering the following:

git config user.name "github_username"

git config user.email "useremail_address"

Make sure to replace “github_username” and “useremail_address” with your own.

The links given below will help you understand the difference between local and global configuration.

https://stackoverflow.com/a/66108560 https://www.atlassian.com/git/tutorials/setting-up-a-repository/git-config

Note: sometimes a case may arise where you cannot configure your name and mail. In that case, use the command git config --list.

This shows the identity of the user that will appear with each commit. If it is not the right one, use one of the following two commands, git config --edit or git config --global --edit, to solve the issue.

To know more about the git conventions and general rules go through the wiki page: https://gits-15.sys.kth.se/AD-EYE/AD-EYE_Core/wiki/Version-Control


Install-AD-EYE-on-DPX2.md

Install AD-EYE on DPX2

The installation of AD-EYE on the Drive PX2 follows a similar procedure as on our computers with the dual setup. So far we have tested AD-EYE w/ the following configuration:

  • Ubuntu version: 16.04
  • ROS: Kinetic
  • OpenCV: 2.4.9.1
  • CUDA: 9.2

Below, steps are provided to install some of the dependencies (if different from the original links in the install guides [ADD REFERENCE HERE]) and to tackle any errors or bugs you may encounter. After resetting the PX2 (with NVIDIA Drive OS), the system already comes with Ubuntu 16.04 and CUDA installed.

Update your system

For the DPX2, the first step is to make sure your system is up to date.

  • to download the latest updates:
sudo apt update
  • to install them:
sudo apt upgrade

CUDA

Note that CUDA is already installed right after flashing the board with the NVIDIA DRIVE OS. To make it visible to the shell, add it to your environment:

echo "export PATH=/usr/local/cuda/bin/:\$PATH" >> ~/.bashrc
echo "export LD_LIBRARY_PATH=/usr/local/cuda/targets/aarch64-linux/lib:\$LD_LIBRARY_PATH" >> ~/.bashrc

source ~/.bashrc 
nvcc -V # check the version of the CUDA compiler

ROS Kinetic

The main instructions can be followed in Install-ROS-Kinetic.

Full list of steps:

Make sure you follow the steps below one by one:

  • setup your sources.list & keys:
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'

sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654
  • fetch updates & install libraries:
sudo apt-get update

sudo apt-get install -y build-essential cmake python-pip

sudo apt-get install -y checkinstall

sudo apt-get install -y libavutil-ffmpeg54

sudo apt-get install -y libswresample-ffmpeg1

sudo apt-get install -y libavformat-ffmpeg56

sudo apt-get install -y libswscale-ffmpeg3

sudo apt-get install aptitude

sudo aptitude install libssl-dev # follow instructions in the wiki for installing this library (and downgrade libssl-dev)

sudo apt-get install -y libnlopt-dev freeglut3-dev qtbase5-dev libqt5opengl5-dev libssh2-1-dev libarmadillo-dev libpcap-dev gksu libgl1-mesa-dev libglew-dev
  • Install ROS Kinetic and dependencies for building packages:
sudo apt-get install -y ros-kinetic-desktop-full

sudo apt-get install -y ros-kinetic-nmea-msgs ros-kinetic-nmea-navsat-driver ros-kinetic-sound-play ros-kinetic-jsk-visualization ros-kinetic-grid-map ros-kinetic-gps-common

sudo apt-get install -y ros-kinetic-controller-manager ros-kinetic-ros-control ros-kinetic-ros-controllers ros-kinetic-gazebo-ros-control ros-kinetic-joystick-drivers

sudo apt-get install -y ros-kinetic-camera-info-manager-py ros-kinetic-camera-info-manager

sudo apt-get install -y python-rosdep python-rosinstall-generator python-wstool python-rosinstall build-essential

sudo rosdep init
rosdep update
  • Environment setup (so that ROS is recognized):
echo "source /opt/ros/kinetic/setup.bash" >> ~/.bashrc
source ~/.bashrc

Errors you might meet:

  • If broken ROS dependencies show up when installing ROS, autoware or when running the simulation, they can be fixed by executing:
sudo apt install ros-kinetic-'name of package'

For instance, in our case we had a rviz package missing which got fixed by executing:

sudo apt install ros-kinetic-jsk-rviz-plugins
  • When you run sudo apt-get install ros-kinetic-desktop-full, you might get this error:
Reading package lists... Done

Building dependency tree

Reading state information... Done

Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:

The following packages have unmet dependencies:  ros-kinetic-desktop-full :

Depends: ros-kinetic-desktop but it is not going to be installed

Depends: ros-kinetic-perception but it is not going to be installed

Depends: ros-kinetic-simulators but it is not going to be installed E: Unable to correct problems, you have held broken packages.

You should run sudo apt-get install aptitude and then sudo aptitude install libssl-dev to downgrade the version of libssl-dev.

NOTE: It's very important to check that the solution you accept reports 0 to remove, as in the transcript below; if it does not, rerun the command!

adeye@tegra-ubuntu:/etc/apt/sources.list.d$ sudo aptitude install libssl-dev
The following NEW packages will be installed:
  libssl-dev{b} libssl-doc{a} 
0 packages upgraded, 2 newly installed, 0 to remove and 30 not upgraded.
Need to get 1,077 kB/2,123 kB of archives. After unpacking 9,388 kB will be used.
The following packages have unmet dependencies:
 libssl-dev : Depends: libssl1.0.0 (= 1.0.2g-1ubuntu4.2) but 1.0.2g-1ubuntu4.5 is installed.
The following actions will resolve these dependencies:

     Keep the following packages at their current version:
1)     libssl-dev [Not Installed]                         

Accept this solution? [Y/n/q/?] n
The following actions will resolve these dependencies:

     Install the following packages:                                            
1)     libssl-dev [1.0.2g-1ubuntu4 (xenial)]                                    

     Downgrade the following packages:                                          
2)     libssl1.0.0 [1.0.2g-1ubuntu4.5 (<NULL>, now) -> 1.0.2g-1ubuntu4 (xenial)]

Accept this solution? [Y/n/q/?] y
The following packages will be DOWNGRADED:
  libssl1.0.0 
The following NEW packages will be installed:
  libssl-dev libssl-doc{a} 
0 packages upgraded, 2 newly installed, 1 downgraded, 0 to remove and 30 not upgraded.
Need to get 2,849 kB of archives. After unpacking 9,457 kB will be used.
Do you want to continue? [Y/n/?] y
Get: 1 http://ports.ubuntu.com/ubuntu-ports xenial/main arm64 libssl1.0.0 arm64 1.0.2g-1ubuntu4 [726 kB]
Get: 2 http://ports.ubuntu.com/ubuntu-ports xenial/main arm64 libssl-dev arm64 1.0.2g-1ubuntu4 [1,046 kB]
Get: 3 http://ports.ubuntu.com/ubuntu-ports xenial-security/main arm64 libssl-doc all 1.0.2g-1ubuntu4.15 [1,077 kB]
Fetched 2,849 kB in 0s (5,572 kB/s)   
Preconfiguring packages ...
dpkg: warning: downgrading libssl1.0.0:arm64 from 1.0.2g-1ubuntu4.5 to 1.0.2g-1ubuntu4
(Reading database ... 166815 files and directories currently installed.)
Preparing to unpack .../libssl1.0.0_1.0.2g-1ubuntu4_arm64.deb ...
Unpacking libssl1.0.0:arm64 (1.0.2g-1ubuntu4) over (1.0.2g-1ubuntu4.5) ...
Selecting previously unselected package libssl-dev:arm64.
Preparing to unpack .../libssl-dev_1.0.2g-1ubuntu4_arm64.deb ...
Unpacking libssl-dev:arm64 (1.0.2g-1ubuntu4) ...
Selecting previously unselected package libssl-doc.
Preparing to unpack .../libssl-doc_1.0.2g-1ubuntu4.15_all.deb ...
Unpacking libssl-doc (1.0.2g-1ubuntu4.15) ...
Processing triggers for libc-bin (2.23-0ubuntu11) ...
Processing triggers for man-db (2.7.5-1) ...
Setting up libssl1.0.0:arm64 (1.0.2g-1ubuntu4) ...
Setting up libssl-dev:arm64 (1.0.2g-1ubuntu4) ...
Setting up libssl-doc (1.0.2g-1ubuntu4.15) ...
Processing triggers for libc-bin (2.23-0ubuntu11) ...

SSDCaffe

The main steps for installing SSDCaffe are listed here.

  • Note that SSDCaffe requires OpenCV:
sudo apt-get install libopencv-dev 

One can check whether it is installed using the following commands:

  • check installed OpenCV libraries/dependencies

dpkg -l | grep libopencv

  • check OpenCV version:

pkg-config --modversion opencv

  • The remaining dependencies are installed with the steps below:
sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev  

sudo apt-get install libhdf5-serial-dev protobuf-compiler

sudo apt-get install --no-install-recommends libboost-all-dev

sudo apt-get install libgoogle-glog-dev

sudo apt-get install liblmdb-dev

sudo apt-get install libopenblas-dev
  • Clone the SSDCaffe repository and switch to the recomended branch:
git clone -b ssd https://github.com/weiliu89/caffe.git ssdcaffe

cd ~/ssdcaffe

git checkout 4817bf8b4200b35ada8ed0dc378dceaf38c539e4
  • Follow the Install SSDCaffe guide to modify the Makefile and Makefile.config files.

  • Compile the library (so build is generated):

sudo make clean
make all -j6
make test -j6
make runtest -j6

make && make distribute # compile SSDCaffe
  • Add build path to ~/.bashrc file (so SSDCaffe is recognized):
echo "export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/home/adeye/ssdcaffe/build/lib:\$LD_LIBRARY_PATH" >> ~/.bashrc
source ~/.bashrc

Errors you might meet:

Most of the errors met during this installation process and their solutions can be found in the Install SSDCaffe guide. However, here we highlight other errors that may occur when running AD-EYE:

  • Fix error:
Permission denied: "/home/adeye/ssdcaffe/results/SSD_512X512"   
  [vision_ssd_detect-19] process has died

Solution: The path of the neural network in vision_ssd_detect is incorrect and should be changed to the correct path. The path is set in the file deploy.prototxt, which should be found in a path similar to the one we found: /home/adeye/AD-EYE_Core/AD-EYE/Data/ssdcaffe_models/AD-EYE_SSD_Model/SSD_512x512. Note that we are assuming that AD-EYE_Core is in /home/adeye/.

The folder path in deploy.prototxt can be found at lines 1821, 1824 and 1825. If you cannot find the file or line, simply use the Linux grep command, e.g. grep -Hrn 'search term' path/to/files, where path/to/files can be omitted if you're already in the correct folder.
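As a concrete sketch of that fix, the commands below locate and rewrite the results path. A scratch file stands in for the real deploy.prototxt, and the field name output_directory is illustrative, not the file's actual syntax; on the real system, run the grep and sed on the actual file.

```shell
# Locate and rewrite the results path inside deploy.prototxt (sketch on a scratch file)
PROTOTXT=$(mktemp)
printf 'output_directory: "/home/adeye/ssdcaffe/results/SSD_512X512"\n' > "$PROTOTXT"
grep -Hn 'SSD_512X512' "$PROTOTXT"   # find the offending lines (1821, 1824, 1825 in the real file)
sed -i 's|/home/adeye/ssdcaffe/results/SSD_512X512|/home/adeye/AD-EYE_Core/AD-EYE/Data/ssdcaffe_models/AD-EYE_SSD_Model/SSD_512x512|' "$PROTOTXT"
grep 'output_directory' "$PROTOTXT"  # the corrected path is now in place
```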

  • Fix error:
[vision_ssd_detect-18] process has died...

Solution: If the methods given here [CHANGE LINK] do not work, the following method can be tried:

  1. create a file caffe.conf in the folder /etc/ld.so.conf.d
  2. add the path of libcaffe.so.1.0.0-rc3 (found in /ssdcaffe/build/lib) into the file caffe.conf
  3. run sudo ldconfig
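A sketch of those three steps is shown below; a temporary directory stands in for /etc/ld.so.conf.d, where sudo is required on the real system.

```shell
# Steps 1 and 2: create caffe.conf containing the library path
CONF_DIR=$(mktemp -d)              # stands in for /etc/ld.so.conf.d
echo "/home/adeye/ssdcaffe/build/lib" > "$CONF_DIR/caffe.conf"
cat "$CONF_DIR/caffe.conf"
# Step 3 on the real system: sudo ldconfig (rebuilds the linker cache)
```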

NOTE:

  1. As a complement to the modifications [CHANGE LINK] in the Makefiles, for the PX2, choose sm=61 and sm=62
  2. During the compilation process, make runtest will report several broken tests, but this won't prevent SSDCaffe from working on the DPX2.

More information here: https://devtalk.nvidia.com/default/topic/1066619/errors-when-build-the-single-shot-detector-ssd-on-px2/

Autoware and AD-EYE

In order to install Autoware & AD-EYE, follow the main steps mentioned here. You may encounter errors other than those mentioned there:

Errors you might meet:

  • Missing package while building Autoware
CMake Error at /opt/ros/kinetic/share/catkin/cmake/catkinConfig.cmake:83 (find_package):
  Could not find a package configuration file provided by "nmea_msgs" with
  any of the following names:

    nmea_msgsConfig.cmake
    nmea_msgs-config.cmake

  Add the installation prefix of "nmea_msgs" to CMAKE_PREFIX_PATH or set
  "nmea_msgs_DIR" to a directory containing one of the above files.  If
  "nmea_msgs" provides a separate development package or SDK, be sure it has
  been installed.
Call Stack (most recent call first):
  CMakeLists.txt:4 (find_package)


---
Failed   <<< autoware_bag_tools [9.69s, exited with code 1]
Aborted  <<< op_simu [9.25s]
Aborted  <<< libvectormap [20.2s]
Aborted  <<< waypoint_follower [7min 16s]
Aborted  <<< imm_ukf_pda_track [4min 25s]

Summary: 26 packages finished [13min 6s]
  2 packages failed: autoware_bag_tools object_map
  4 packages aborted: imm_ukf_pda_track libvectormap op_simu waypoint_follower
  10 packages had stderr output: astar_search autoware_bag_tools kitti_player map_file ndt_cpu ndt_gpu object_map pcl_omp_registration vector_map_server waypoint_follower
  74 packages not processed

Solution: run sudo apt-get update, followed by sudo apt-get install -y 'your_missing_package_name' (in this case, sudo apt-get install ros-kinetic-nmea-msgs). Note that some packages were not installed when sudo apt-get install -y ros-kinetic-desktop-full was executed.

Connection between DPX2 and Prescan (Windows) when testing AD-EYE

If the connection/communication between the Prescan computer (host) and the PX2 is not working but no error messages are displayed on the host computer, it is most likely due to the argument of the command used to set up the connection. The command used is rosinit('IP_OF_COMPUTER'), where IP_OF_COMPUTER can be either the network address or the name associated with the IP. Due to a Prescan bug, the command should always use the name, which is, unless changed, tegra-ubuntu.

To associate the IP with a name, add the IP address and name to the file C:\Windows\System32\drivers\etc\hosts.
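For example, the entry could look like the fragment below; the IP address shown is a placeholder for the PX2's actual address.

```
# C:\Windows\System32\drivers\etc\hosts  (edit as Administrator)
192.168.1.42    tegra-ubuntu
```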

Precautions for embedded system

Disk space

The limited disk space of PX2 may cause errors during installation steps, so always keep an eye on the remaining space and clean up useless files. Some useful tips:

  1. Download large files to an external hard drive, but pay attention to dependencies if software is installed on it.
  2. Use rosclean purge to clean up the ros log files. For more information: http://wiki.ros.org/rosclean

Errors you might meet:

  • Error to fix:

Fix "Package exfat-utils is... (Hard drive cannot be recognized)

Solution:

sudo add-apt-repository "deb http://archive.ubuntu.com/ubuntu $(lsb_release -sc) universe"
sudo add-apt-repository universe
sudo apt-get update
sudo apt-get install exfat-fuse exfat-utils 

Source: https://unix.stackexchange.com/questions/321494/ubuntu-16-04-package-exfat-utils-is-not-available-but-is-referred-to-by-anothe

Memory (RAM)

Ubuntu 16.04 on the DPX2 has less than 6 GB of RAM, while the Autoware installation may need more. Without enough memory, the build will get stuck and finally terminate with errors; besides closing applications that occupy a lot of RAM (e.g. the browser), swap space may be needed.

There is a trade-off between disk space and RAM space; in our case, we allocate 6-8 GB (preferably 8 GB) for the swap file. Follow this guide for creating a swap space.
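The usual Ubuntu swap-file recipe looks like the sketch below, sized down to a scratch file so it runs unprivileged. On the PX2, use /swapfile with an 8 GB size, activate it with sudo swapon /swapfile, and add an /etc/fstab entry to make it permanent.

```shell
# Create, protect, and format a swap file (scratch-sized here; use count=8192 for 8 GB)
SWAPFILE=$(mktemp)
dd if=/dev/zero of="$SWAPFILE" bs=1M count=8 status=none
chmod 600 "$SWAPFILE"              # swap files must not be readable by other users
mkswap "$SWAPFILE"                 # format the file as swap space
```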

Measuring GPU utilization and performance

NVIDIA Nsight Systems tools (including nvprof and the NVIDIA Visual Profiler) are performance tools provided by NVIDIA. They are part of the CUDA toolkit, which should already be installed when the PX2 is flashed (the steps under Driver and CUDA, also given here).

Note, however, that the tools can only be used remotely to profile the PX2, via an SSH connection between the host and the target hardware (PX2). If the PX2 is flashed using the SDK manager, the SDK manager will install on the host the CUDA toolkit that matches the one installed on the target; it is important that they match (!).

The CUDA toolkits and nsight systems performance tools that can be downloaded directly from the website are not supported on the PX2. Please refer to the following link: https://devtalk.nvidia.com/default/topic/1052052/profiling-drive-targets/connection-error/

NVIDIA Nsight Systems via an SSH connection was not used here because the profiling process needs to be able to terminate and restart the application multiple times. This is problematic since we would need to terminate and restart the Prescan simulation as well, which is difficult because we have no control over, or knowledge of, when the tool does this.

Using nvprof and nsight visual profiler

You can generate a timeline using nvprof in the terminal, locally on the PX2. However, to visualise it and get statistics and optimisation recommendations, use the host computer: import the timeline into the visual profiler to get statistics on the GPU utilisation of CUDA applications (nodes).

  1. On the target, run:
nvprof --export-profile <path/to/file/timeline%p.prof> --profile-child-processes roslaunch adeye manager.launch
  2. Move the files to any directory of your choice on the host. Then go to the directory where the files are saved and run /usr/local/cuda-9.2/libnvvp/nvvp <timeline#.prof>, replacing <timeline#.prof> with the correct filename; /usr/local/cuda-9.2/libnvvp/nvvp is the path to the visual profiler in the CUDA toolkit installed by the SDK manager.

For more information on nvprof and visual profiler refer to the NVIDIA documentation website: https://docs.nvidia.com/cuda/profiler-users-guide/index.html

Please also note that tegrastats does not provide correct dGPU statistics on the Drive PX2.

GPU memory usage

By compiling the code in the file gpustats.cu and running the resulting executable, information about all present GPUs is printed in the terminal, followed by the memory usage, in percent, of the currently used GPU.

To compile the code, execute the following command in the terminal. Note that CUDA has to be installed before doing this step.

nvcc /path/to/gpustats.cu -o gpustats

To execute the runnable file created, execute the following command:

./path/to/gpustats

As stated above, the program starts by retrieving and printing the info for the present GPUs. It does so by using a function from the CUDA Runtime API which returns a cudaDeviceProp structure containing 69 data fields corresponding to the GPU. The function is executed on the host (CPU) and is stated as follows.

cudaGetDeviceProperties(cudaDeviceProp* prop, int  device)

where prop is a pointer to a cudaDeviceProp struct and device is an integer that encodes the ID of the desired device. More information on what data fields are available in the structure and about the function can be found here.

After retrieving and printing the GPU info the program continues into a loop that retrieves the free and total device memory which is used to later calculate the used memory. Before calculating and printing the memory usage the program retrieves the device currently being used. More information can be found in the CUDA Runtime API guide from NVIDIA.


Installation.md

Dual computer setup

Windows

Linux

Setup host names


One Computer Setup


Nvidia Drive PX2


Install-Autoware-and-AD-EYE.md

Cloning the repository

Clone the AD-EYE repository and checkout to the dev branch

git clone https://github.com/AD-EYE/AD-EYE_Core
cd AD-EYE_Core
git checkout dev
git submodule update --init --recursive

After cloning the repository, you can run the script that installs Autoware and its dependencies, or install them manually by following the steps below.

Installation by running the scripts

Modify the boolean WITH_CUDA in the file Helper_Scripts/Install_AD-EYE.bash to suit your setup. Then run:

bash Helper_Scripts/Install_AD-EYE.bash

You will be prompted for your git username and token as the different required repositories are cloned.

Manual Installation

Getting the repository

Get all the Autoware repositories

cd autoware.ai
./autoware.ai.repos.run

Install Autoware

System dependencies for Ubuntu 16.04 / Kinetic
sudo apt-get update
sudo apt-get install -y python-catkin-pkg python-rosdep ros-$ROS_DISTRO-catkin gksu
sudo apt-get install -y python3-pip python3-colcon-common-extensions python3-setuptools python3-vcstool
sudo apt-get install openni2-doc openni2-utils openni-doc openni-utils libopenni0 libopenni-sensor-pointclouds0 libopenni2-0 libopenni-sensor-pointclouds-dev libopenni2-dev libopenni-dev libproj-dev
pip3 install -U setuptools
Install dependencies using rosdep
rosdep update --rosdistro=kinetic
rosdep install -y --from-paths src --ignore-src --rosdistro $ROS_DISTRO

If one gets issues with rosdep due to an outdated version of rospkg, an update can be made using pip install rospkg==required_version

Compile the workspace

The compilation must be done in AD-EYE_Core/autoware.ai.

With CUDA support

AUTOWARE_COMPILE_WITH_CUDA=1 colcon build --cmake-args -DCMAKE_BUILD_TYPE=Release

Without CUDA Support

colcon build --cmake-args -DCMAKE_BUILD_TYPE=Release
Update the bashrc
echo "source $HOME/AD-EYE_Core/autoware.ai/install/setup.bash --extend" >> ~/.bashrc
exec bash
Return to AD-EYE_Core folder
cd ..

Install AD-EYE

Install AD-EYE dependencies
sudo apt-get install ros-kinetic-costmap-2d
sudo apt-get install ros-kinetic-navigation
sudo apt-get install ros-kinetic-grid-map
sudo apt-get install ros-kinetic-rosbridge-suite
Compile the workspace
cd AD-EYE/ROS_Packages
catkin_make
Update the bashrc
echo "source $HOME/AD-EYE_Core/AD-EYE/ROS_Packages/devel/setup.bash --extend" >> ~/.bashrc
exec bash

Install OpenSSH

OpenSSH is required for Test Automation and OpenScenario.

sudo apt install openssh-client
sudo apt install openssh-server

Source:

https://help.ubuntu.com/lts/serverguide/openssh-server.html


Possible issue while compiling Autoware when running the script (libGL.so)

During the compilation of Autoware, you may encounter an issue related to libGL.so. In that case, reinstall the NVIDIA drivers and compile again.


Next step: Install VectorMapper
Back to the overview: Installation

Install-Autoware-PX2.md

NOTE: THIS GUIDE IS OUTDATED SINCE 2019 (STEPS BELOW NEED TO BE REVISED) - KEPT NOW FOR REFERENCE

Updates

Ubuntu 16.04 with proposed packages 2018/10/22:

sudo apt update

sudo apt upgrade

To fix gnome-terminal, set: LC_ALL=en_US.UTF-8

Preparations for ros/autoware installation

1. Replace graphics libraries supplied by nvidia with default ones

mkdir -p backup/usr/lib

sudo cp -a /usr/lib/libdrm* backup/usr/lib

sudo cp -a /usr/lib/libwayland-* backup/usr/lib

mkdir -p backup/etc/nvidia

sudo cp -a /etc/nvidia/nvidia_gl.conf backup/etc/nvidia

sudo cp -a /etc/nvidia/nvidia_egl.conf backup/etc/nvidia

sudo apt-get install --reinstall -y libdrm2 libdrm-dev libwayland-client0 libwayland-cursor0 libwayland-egl1-mesa libwayland-server0 libwayland-dev

sudo ldconfig

2. Reboot

Reboot the computer

ROS installation

sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'

sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net --recv-key 421C365BD9FF1F717815A3895523BAEEB01FA116

sudo apt-get update

sudo apt-get install -y build-essential cmake python-pip

sudo apt-get install -y checkinstall

sudo apt-get install -y python-rosdep python-rosinstall-generator python-wstool python-rosinstall build-essential

sudo apt-get install -y libavutil-ffmpeg54

sudo apt-get install -y libswresample-ffmpeg1

sudo apt-get install -y libavformat-ffmpeg56

sudo apt-get install -y libswscale-ffmpeg3

sudo apt-get install -y libssl1.0.0=1.0.2g-1ubuntu4.13

sudo apt-get install -y libssl-dev=1.0.2g-1ubuntu4.13

sudo apt-get install -y ros-kinetic-desktop-full

sudo apt-get install -y ros-kinetic-nmea-msgs ros-kinetic-nmea-navsat-driver ros-kinetic-sound-play ros-kinetic-jsk-visualization ros-kinetic-grid-map ros-kinetic-gps-common

sudo apt-get install -y ros-kinetic-controller-manager ros-kinetic-ros-control ros-kinetic-ros-controllers ros-kinetic-gazebo-ros-control ros-kinetic-joystick-drivers

sudo apt-get install -y libnlopt-dev freeglut3-dev qtbase5-dev libqt5opengl5-dev libssh2-1-dev libarmadillo-dev libpcap-dev gksu libgl1-mesa-dev libglew-dev

sudo apt-get install -y ros-kinetic-camera-info-manager-py ros-kinetic-camera-info-manager

Autoware Installation

source /opt/ros/kinetic/setup.bash

sudo apt-get install -y openssh-server libnlopt-dev freeglut3-dev qtbase5-dev libqt5opengl5-dev libssh2-1-dev libarmadillo-dev libpcap-dev git

sudo apt-get install -y libnlopt-dev freeglut3-dev qt5-default libqt5opengl5-dev libssh2-1-dev libarmadillo-dev libpcap-dev libglew-dev gksu

sudo apt-get install -y libxmu-dev python-wxgtk3.0 python-wxgtk3.0-dev

sudo ln -s /usr/include/aarch64-linux-gnu/qt5 /usr/include/qt5

sudo ln -s /usr/local/cuda/lib64/libcudart.so /usr/lib/libcudart.so

cd

git clone https://github.com/CPFL/Autoware.git

cd ~/Autoware

git submodule update --init --recursive

cd ~/Autoware/ros/src

catkin_init_workspace

cd ..

rosdep install -y --from-paths src --ignore-src --rosdistro $ROS_DISTRO

./catkin_make_release -j 1

To run demo follow this video:

https://www.youtube.com/watch?v=OWwtr_71cqI


To add a new experiment:

Apply patches according to Jose's instructions, and re-compile using: ./catkin_make_release -j 1

  1. Move the data files to .autoware/data
  2. move quicklaunch files to where the manager.launch file loads them from
  3. edit quicklaunch files so they load the files correctly

To run experiments

Make sure these lines are in your .bashrc file:

source /opt/ros/kinetic/setup.bash

export PATH=${PATH}:/usr/local/autoware/bin

source ~/Autoware/ros/devel/setup.bash

Open 3 terminals and run these commands:

ros

cd ~/Autoware/ros && ./run

roslaunch manager manager.launch

Source

https://github.com/CPFL/Autoware/wiki/Source-Build

https://github.com/CPFL/Autoware/blob/master/docs/en/installation_with_drivepx2.md


Install-Caffe.md

Install SSDCaffe

  1. Complete the pre-requisites. Make sure the version of OpenCV that you've installed is at least 2.4.10.

NOTE: From this step, you can download a script to automatically install SSDCaffe. This script is located on the dev branch, in the Helper_Scripts folder.

WARNING: For it to work, you must have installed cuDNN (see previously) and OpenCV must be version 2. If you have version 3, this is not a big issue, as you can modify a file to take this into account (see the script for further information). If the installation doesn't work, it may be because of the line make distribute, which can be responsible for some bugs (especially when OpenCV version 2.x is installed). You should comment out this particular line if the installation doesn't work.
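If you need to disable that line, a sed one-liner does it. The sketch below uses a scratch file standing in for the install script, whose exact filename is not given on this page.

```shell
SCRIPT=$(mktemp)                               # stands in for the install script
printf 'make all\nmake distribute\n' > "$SCRIPT"
sed -i 's/^make distribute/# make distribute/' "$SCRIPT"   # comment the line out
grep '^#' "$SCRIPT"                            # shows the disabled line
```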

  2. Clone the SSD Caffe fork in your home directory (CMake files will be looking for it there).
git clone -b ssd https://github.com/weiliu89/caffe.git ssdcaffe
cd ssdcaffe
git checkout 4817bf8b4200b35ada8ed0dc378dceaf38c539e4
  3. Delete -gencode arch=compute_20,code=sm_20 \ and -gencode arch=compute_20,code=sm_21 \ in Makefile.config, and add -gencode arch=compute_75,code=sm_75 \ if using an RTX 2080 (the codes matching different GPU architectures can be found here). Append /usr/include/hdf5/serial/ to INCLUDE_DIRS at line 92 in Makefile.config.
--- INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include
+++ INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial/

Modify hdf5_hl and hdf5 to hdf5_serial_hl and hdf5_serial at line 181 in Makefile

--- LIBRARIES += glog gflags protobuf boost_system boost_filesystem boost_regex m hdf5_hl hdf5
+++ LIBRARIES += glog gflags protobuf boost_system boost_filesystem boost_regex m hdf5_serial_hl hdf5_serial

Finally, run the following commands:

sudo apt install liblapack-dev liblapack3 libopenblas-base libopenblas-dev
sudo apt-get install libgflags-dev libgoogle-glog-dev liblmdb-dev
  4. Follow the authors' instructions to complete the pre-requisites for compilation: http://caffe.berkeleyvision.org/installation.html#compilation

  5. Compile Caffe:

make && make distribute
  6. Add the following line to the ~/.bashrc file:
export LD_LIBRARY_PATH=/home/adeye/ssdcaffe/build/lib:$LD_LIBRARY_PATH

Source: http://caffe.berkeleyvision.org/installation.html#compilation

https://web.archive.org/web/20190703125543/http://caffe.berkeleyvision.org/installation.html

Notes

Remember to modify the deploy.prototxt to use the FULL PATH to the VOC label prototxt file.

Open the file $SSDCAFFEPATH/models/VGGNet/VOC0712/SSD_512x512/deploy.prototxt

Change line 1803 to point to the full path of the labelmap_voc.prototxt file.

Source: https://github.com/autowarefoundation/autoware/blob/master/ros/src/computing/perception/detection/vision_detector/packages/vision_ssd_detect/README.md

Make: protoc issue:

Problem:

make: protoc: Command not found

Solution: sudo apt-get install protobuf-compiler

Fix hdf5 naming problem

Your machine may report the following error when compiling Caffe even though the libhdf5-serial-dev package has already been installed.

./include/caffe/util/hdf5.hpp:6:18: fatal error: hdf5.h: No such file or directory

This is because the default paths and names of the hdf5 header files and libraries changed in Ubuntu 15.10. To solve this problem, we can simply modify the Makefile files.

Append /usr/include/hdf5/serial/ to INCLUDE_DIRS at line 92 in Makefile.config.

--- INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include
+++ INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial/

Modify hdf5_hl and hdf5 to hdf5_serial_hl and hdf5_serial at line 181 in Makefile

--- LIBRARIES += glog gflags protobuf boost_system boost_filesystem boost_regex m hdf5_hl hdf5
+++ LIBRARIES += glog gflags protobuf boost_system boost_filesystem boost_regex m hdf5_serial_hl hdf5_serial

Source: https://gist.github.com/wangruohui/679b05fcd1466bb0937f#fix-hdf5-naming-problem

Fix "undefined reference to boost:: ..."

Just add boost_regex to the LIBRARIES line (line 181) in Makefile.

--- LIBRARIES += glog gflags protobuf boost_system boost_filesystem m hdf5_serial_hl hdf5_serial
+++ LIBRARIES += glog gflags protobuf boost_system boost_filesystem boost_regex m hdf5_serial_hl hdf5_serial

Source: https://github.com/rbgirshick/fast-rcnn/issues/52

Fix "undefined reference to cv2:: ..."

Make sure to have a compatible version of cv2 installed. In my case, the problem was solved by installing version 2.4.13; note that 2.4.9 doesn't seem to work.

Fix "undefined reference to cv::VideoCapture::... / cv::VideoWriter::..."

This occurs when some libopencv are not included as libraries.

Solution: Edit the Makefile as follows. Go to line 204, after some conditional statements, just before
"PYTHON_LIBRARIES ?= boost_python python2.7"
and add the following line: LIBRARIES += opencv_videoio opencv_imgcodecs

Note: By adding only opencv_videoio without opencv_imgcodecs, you will get an
undefined reference to symbol '_ZN2cv6imreadERKNS_6StringEi' error,
which can be fixed by adding opencv_imgcodecs to LIBRARIES.

Fix "numpy/arrayobject.h: No such file or directory"

Add the path where numpy is installed to Makefile.config at line 64. In my case, I had to make the following change:

--- PYTHON_INCLUDE := /usr/include/python2.7 \
		/usr/lib/python2.7/dist-packages/numpy/core/include \
+++ PYTHON_INCLUDE := /usr/include/python2.7 \
		/usr/lib/python2.7/dist-packages/numpy/core/include \
                /usr/local/lib/python2.7/dist-packages/numpy/core/include

Fix "[vision_ssd_detect-18] process has died..."

I do not know if these two fixes effectively resolve the bug, but I did the following and it worked afterwards.

First, add -gencode arch=compute_XX,code=sm_XX to the CUDA_ARCH line (line 35), replacing XX with the correct number (e.g. 75 for RTX GPUs).

In some cases, the library libcaffe.so.1.0.0-rc3 is not found at execution time. To solve it:

In the bashrc file, in addition to the CUDA path in the LD_LIBRARY_PATH variable, add the path to the file libcaffe.so.1.0.0-rc3 and separate the two paths with a colon. For me, it was:

export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/home/adeye/ssdcaffe/build/lib:$LD_LIBRARY_PATH

Fix Gflags/glog issues

Error: In file included from src/caffe/net.cpp: ./include/caffe/common.hpp:_numberofline_: fatal error: gflags/gflags.h **OR** glog/logging.h: No such file or directory. compilation terminated. recipe for target build_release/src/caffe/net.o failed

Fix: sudo apt-get install libgflags-dev and sudo apt install libgoogle-glog-dev

Fix opencv_imgcodecs opencv_videoio issues:

Error : /usr/bin/ld: cannot find -lopencv_imgcodecs /usr/bin/ld: cannot find -lopencv_videoio collect2: error: ld returned 1 exit status Makefile:_numberofline_: recipe for target .build_release/lib/libcaffe.so.1.0.0-rc5 failed make: *** [.build_release/lib/libcaffe.so.1.0.0-rc5] Error 1

Fix : Open the Makefile with your favorite text editor and locate the following line: LIBRARIES += glog gflags .... and add opencv_imgcodecs to it

Issue about the new models

https://github.com/autowarefoundation/autoware/issues/1020

Autoware details

Run the node

Once compiled, run it from the terminal, or launch it from the Runtime Manager:
roslaunch vision_ssd_detect vision_ssd_detect network_definition_file:=/PATH/TO/deploy.prototxt pretrained_model_file:=/PATH/TO/model.caffemodel

Remember to modify the launch file located inside AUTOWARE_BASEDIR/ros/src/computing/perception/detection/vision_detector/packages/vision_ssd_detect/launch/vision_ssd_detect.launch and point the network and pre-trained models to your paths.

Launch file params

| Parameter | Type | Description | Default |
|---|---|---|---|
| use_gpu | Bool | Whether to use GPU acceleration. | true |
| gpu_device_id | Integer | ID of the GPU to be used. | 0 |
| score_threshold | Double | Value between 0 and 1. Defines the minimum score value to filter detections. | 0.5 |
| network_definition_file | String | Path to the prototxt file | $SSDCAFFEPATH/models/VGGNet/VOC0712/SSD_512x512/deploy.prototxt |
| pretrained_model_file | String | Path to the caffemodel file | $SSDCAFFEPATH/models/VGGNet/VOC0712/SSD_512x512/VGG_VOC0712_SSD_512x512_iter_120000.caffemodel |
| camera_id | String | Camera ID to subscribe to, e.g. camera0 | / |
| image_src | String | Name of the image topic to subscribe to | /image_raw |

Next step: Install Autoware and AD EYE
Back to the overview: Installation

Install-CUDA-and-graphics-drivers.md

Installation of NVIDIA Driver, CUDA and cuDNN

Disable Nouveau (default graphic driver on Ubuntu)

  1. Open /etc/modprobe.d/blacklist-nouveau.conf and add the following lines:

    blacklist nouveau
    options nouveau modeset=0

    Save it (sudo privilege may be required).

  2. Run sudo update-initramfs -u and reboot system.

Install NVIDIA GPU Driver

Download the latest NVIDIA GPU driver (.run file) from http://www.nvidia.com/Download/index.aspx

  1. Set the default run level on your system such that it will boot to a VGA console, and not directly to X. Doing so will make it easier to recover if there is a problem during the installation process. On Ubuntu:
    Before installation:

    sudo systemctl enable multi-user.target
    sudo systemctl set-default multi-user.target
    

    After installation has succeeded:

    sudo systemctl enable graphical.target
    sudo systemctl set-default graphical.target
    
  2. If graphical login-screen appears, press [Alt] + [Ctrl] + [F1] and login in the tty.

  3. Run sudo service lightdm stop to kill the X server temporarily.

  4. Remove all nvidia packages: sudo apt-get remove --purge nvidia*.

  5. Run sudo sh NVIDIA-*.run or sudo sh NVIDIA-*.run --no-opengl-files (on laptops that have both an integrated graphics card and an NVIDIA GPU).

    NOTE: DO NOT run the NVIDIA configuration for X windowing system at the end of the installation of the GPU driver on laptop, since the integrated graphic card will be used to display the desktop. The NVIDIA card will run whenever needed automatically.

  6. Reboot system

Install CUDA

Download a CUDA installer (.run file) from https://developer.nvidia.com/cuda-downloads with a version <= 10.0 to avoid issues with Autoware.

  1. Run sudo sh cuda_***.run. Do not install the GPU driver contained in the CUDA installer, since you have already installed the latest one in the previous section.

  2. Once the installation completes, export PATH and LD_LIBRARY_PATH according to the installer message, e.g. open ~/.bashrc and add these two lines:

    export PATH=/usr/local/cuda/bin:$PATH
    export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH

Some additional packages may be required in order to compile the CUDA samples: sudo apt-get install freeglut3-dev build-essential libx11-dev libxmu-dev libxi-dev libgl1-mesa-glx libgl1-mesa-dev

If you receive a compiler error such as /usr/bin/ld: cannot find -lGL, this command may resolve it:

sudo ln -s /usr/lib/libGL.so.1 /usr/lib/libGL.so

Source: https://github.com/autowarefoundation/autoware/wiki/NVIDIA-Driver

https://web.archive.org/web/20190620095443/https://github.com/autowarefoundation/autoware/wiki/NVIDIA-Driver

Install cuDNN

CuDNN could be useful if you have to do heavy GPU computation (like Deep Learning stuff). Everything's really well explained in the source link and no bugs were encountered during the installation and verification process.

Source: https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html

https://web.archive.org/save/https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html

Problems related to Cuda and graphics drivers

Login Loop

After updating the Ubuntu Kernel due to a security patch, the computer might get to a login loop. In that case, it is not possible to login into the user account.

The fix to this problem is to reinstall the NVIDIA Drivers using the virtual console (Ctrl)+(Alt)+(F1).

Computer stuck on boot screen

If the computer does not start but hangs with an underscore in the top left corner after installing the graphics driver or CUDA, it is likely that the Kernel version of the graphics driver does not match the Client one, i.e. the installed nvidia package (nvidia-version_number, which can be found using apt list --installed | grep nvidia).

To solve this issue, restart the computer, choose the advanced Ubuntu options in the GRUB menu and select the first recovery mode option. In the recovery menu select root. Having now access to the command line, remove all nvidia packages using:

apt-get remove --purge nvidia*

Rebooting should now work: type reboot. No graphics driver is installed anymore, so install the driver corresponding to the Kernel version (see the next part to find the Kernel version).

Finding nvidia versions after solving the unable to boot issue

The issue causing the computer to be stuck at boot is generally caused by a mismatch between the Client version (the one in the apt-get packages) and the Kernel one.

To investigate the problem, run the script nvidia-bug-report.sh (more information here, path: /usr/bin/nvidia-bug-report.sh ). It will generate a log archive in the working directory. In this log file, look for (ctrl + F) "API mismatch". This line indicates both the Client and the Kernel versions when the issue occurred (look for the last occurrence). The Client version has been uninstalled during the previous step, so it has to be reinstalled with a version matching the Kernel one: sudo apt-get install nvidia-version_number.

source: https://forums.developer.nvidia.com/t/cuda-9-1-on-ubuntu-16-04-installed-but-devicequery-fails/66945

Cuda is not working when trying to run the Caffe tests or to compile autoware

To investigate if Cuda is installed and working properly follow the next steps.

Check the graphics driver version:

$ cat /proc/driver/nvidia/version
     NVRM version: NVIDIA UNIX x86_64 Kernel Module  430.26  Tue Jun  4 17:40:52 CDT 2019
     GCC version:  gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.12) 

Check the CUDA Toolkit version:

$ nvcc -V
     nvcc: NVIDIA (R) Cuda compiler driver
     Copyright (c) 2005-2018 NVIDIA Corporation
     Built on Sat_Aug_25_21:08:01_CDT_2018
     Cuda compilation tools, release 10.0, V10.0.130

Verify the ability to compile cuda samples:

cd ~/
apt-get install cuda-samples-10-0 -y #if not installed
cd /usr/local/cuda-10.0/samples
make

Run CUDA GPU jobs by executing the deviceQuery program:

$ '/usr/local/cuda-10.0/samples/bin/x86_64/linux/release/deviceQuery' 
     /usr/local/cuda-10.0/samples/bin/x86_64/linux/release/deviceQuery Starting...

     CUDA Device Query (Runtime API) version (CUDART static linking)

     Detected 1 CUDA Capable device(s)

     Device 0: "GeForce RTX 2080 Ti"
       CUDA Driver Version / Runtime Version          10.2 / 10.0
       CUDA Capability Major/Minor version number:    7.5
       Total amount of global memory:                 11016 MBytes (11551440896 bytes)
       (68) Multiprocessors, ( 64) CUDA Cores/MP:     4352 CUDA Cores
       GPU Max Clock rate:                            1545 MHz (1.54 GHz)
       Memory Clock rate:                             7000 Mhz
       Memory Bus Width:                              352-bit
       L2 Cache Size:                                 5767168 bytes
       Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
       Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
       Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
       Total amount of constant memory:               65536 bytes
       Total amount of shared memory per block:       49152 bytes
       Total number of registers available per block: 65536
       Warp size:                                     32
       Maximum number of threads per multiprocessor:  1024
       Maximum number of threads per block:           1024
       Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
       Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
       Maximum memory pitch:                          2147483647 bytes
       Texture alignment:                             512 bytes
       Concurrent copy and kernel execution:          Yes with 3 copy engine(s)
       Run time limit on kernels:                     Yes
       Integrated GPU sharing Host Memory:            No
       Support host page-locked memory mapping:       Yes
       Alignment requirement for Surfaces:            Yes
       Device has ECC support:                        Disabled
       Device supports Unified Addressing (UVA):      Yes
       Device supports Compute Preemption:            Yes
       Supports Cooperative Kernel Launch:            Yes
       Supports MultiDevice Co-op Kernel Launch:      Yes
       Device PCI Domain ID / Bus ID / location ID:   0 / 66 / 0
       Compute Mode:
          < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

     deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.2, CUDA Runtime Version = 10.0, NumDevs = 1
     Result = PASS

If either the compilation or the deviceQuery program fails, then there is an issue with the CUDA installation or with the graphics driver.

source: https://xcat-docs.readthedocs.io/en/stable/advanced/gpu/nvidia/verify_cuda_install.html


Next step: Install Caffe
Back to the overview: Installation

Install-Matlab.md

Matlab version should be 2019a or lower to be compatible with Prescan 2020.1. Instructions to download Matlab can be found here.

List of Add-ons to be included during Matlab installation:

  • Aerospace blockset
  • Aerospace toolbox
  • Computer vision toolbox
  • DSP System Toolbox
  • Image processing toolbox
  • Matlab coder
  • Navigation Toolbox (Only if you use Matlab 2019b or newer)
  • Robotic systems toolbox
  • ROS toolbox (Only if you use Matlab 2019b or newer)
  • Signal processing toolbox
  • Simulink coder

List of Add-ons that can be installed from the Add-ons Explorer (after installation):

  • MinGW-w64

Create a symlink to the AD-EYE Simulink library:

Run the Command Prompt as administrator and execute the following command

mklink C:\Users\adeye\Documents\MATLAB\adeye_lib.slx C:\Users\adeye\AD-EYE_Core\AD-EYE\lib\adeye_lib.slx

Next step: Install PreScan
Back to the overview: Installation

Install-PreScan.md

The four installation files are in the box folder: https://kth.app.box.com/folder/53896402743. They need to be extracted, and the .exe one should be run.

The user defined library folder directory should be set to AD-EYE_Core\Prescan_models.

The generic model directory should be C:\Users\Public\Documents\Prescan\GenericModels.

The MATLAB application directory should be C:\Program Files\MATLAB\R2019a.

The experiments directory should be set to AD-EYE_Core\AD-EYE\Experiments.

The scenario extraction should be unticked.

License string: [email protected]


Next step: PreScan Code Generator
Back to the overview: Installation

Install-ROS-Kinetic.md

Setup your sources.list

sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'

Set up your keys

sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654

Installation

sudo apt-get update
sudo apt-get install ros-kinetic-desktop-full

Initialize rosdep

sudo rosdep init
rosdep update

Environment setup

echo "source /opt/ros/kinetic/setup.bash" >> ~/.bashrc
source ~/.bashrc

Dependencies for building packages

sudo apt install python-rosinstall python-rosinstall-generator python-wstool build-essential

Source

http://wiki.ros.org/kinetic/Installation/Ubuntu


Next step: Install CUDA and graphics drivers
Back to the overview: Installation

Install-VectorMapper.md

Cloning the repository

Open a terminal in the AD-EYE_Core folder. Make sure you are in the dev branch and type the following command:

git submodule update --init --recursive

This will clone the Pex_Data_Extraction (vector mapper) repository which is a submodule of AD-EYE_Core.

Linux dependencies

sudo apt install python3-pip
pip3 install --upgrade pip
pip3 install progress
python3 -m pip install numpy
python3 -m pip install bezier (if not working, try pip3 install bezier==0.9.0)
python3 -m pip install matplotlib
python3 -m pip install sphinx
python3 -m pip install lxml
sudo apt-get install python3-tk

Windows dependencies

python -m pip install numpy --user
python -m pip install lxml --user
python -m pip install bezier --user
python -m pip install matplotlib --user

Verifying if the Vector Mapper works

Run the vector mapper on W01_base_map following the instructions here. If there was no error the folder AD-EYE_Core/Pex_Data_Extraction/pex2csv/csv should now contain ten files.

If the folder is not there and the execution of main.py ends with an error, then the folder needs to be created by hand.
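A minimal sketch of creating the missing folder by hand, assuming the repository layout described above (run it from the folder that contains AD-EYE_Core, or adjust the path):

```python
import os

# Output folder the vector mapper expects (path taken from the text above)
csv_dir = os.path.join("AD-EYE_Core", "Pex_Data_Extraction", "pex2csv", "csv")

# Create the folder and any missing parents; does nothing if it already exists
os.makedirs(csv_dir, exist_ok=True)
```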

Support for traffic lights is not visible in the vector map plot that follows its creation. However, the file signaldata.csv should not be empty as W01_base_map contains traffic lights. If it is, it might be that a Prescan update changed how traffic lights are defined in the pex files. In that case, the vector mapper's code needs to be updated accordingly.


Back to the overview: Installation

Instructions-for-applying-the-acs-patch.md

To download the kernel patch Image and Headers, click here

Run the following commands by navigating to the respective folders where we have installed them

sudo dpkg -i linux-headers-*-acso_*_amd64.deb

sudo dpkg -i linux-image-*-acso_*_amd64.deb

After installing the acs patch we need to reboot the system and select the installed acs patch in GRUB.

Install Network Connectivity Drivers

| Name of the driver | Link to download the driver |
|---|---|
| libelf-dev | Download from box |
| 2.5G Ethernet LINUX driver r8125 (Source) | Download from web or Download from box |
| Ethernet, Killer E3000 realtek-r8125 (Compiled) | Download from web or Download from box |
| dkms_2.2.0.3-2 | Download from box |
| Installing dkms drivers without internet | Instructions here |
| A6100 wifi adapter | git |

Run the following commands by navigating to the respective folders where we have installed them

  1. sudo dpkg -i dkms_***-***_all.deb

  2. sudo dpkg -i libelf-dev_***-***_amd64.deb (might have a dependency error; if so, just carry on)

  3. sudo sh autorun.sh (Source version: before running this command, extract the contents, then run the file autorun.sh from the folder created)

    (The alternative is using the compiled version: sudo dpkg -i realtek-r8125-***_amd64.deb. Note that it might not work with the acs patch)

  4. Reboot the system.

After installing the network drivers we need to install the graphics drivers.

Install NVIDIA GPU Driver

(You can download the latest NVIDIA GPU driver (.run file) from http://www.nvidia.com/Download/index.aspx)

  1. If graphical login-screen appears, press [Alt] + [Ctrl] + [F1] and login by virtual console (CUI environment).

  2. Execute $sudo service lightdm stop to kill X server temporarily.

  3. Remove all nvidia packages: $sudo apt-get remove --purge nvidia*.

  4. Execute $sudo sh NVIDIA-*.run or $sudo sh NVIDIA-*.run --no-opengl-files (on laptops that have both integrated graphic card and NVIDIA-GPU).

    NOTE: DO NOT run the NVIDIA configuration for X windowing system at the end of the installation of the GPU driver on laptop, since your integrated graphic card will be used to display the desktop. The NVIDIA card will run whenever needed automatically.

  5. Reboot system.

Computer Specifications

| Computer name | ACS patch | GPU Driver |
|---|---|---|
| adeye07u | 5.7.5-acso | NVIDIA-Linux-x86_64-450.80.02.run |
| adeye08u | 5.7.5-acso | NVIDIA-Linux-x86_64-450.80.02.run |

Tutorial: https://www.youtube.com/watch?v=JBEzshbGPhQ (guide for ACS Kernel patch)

Issue Building Autoware: missing libEGL.so

If there is an issue while trying to build Autoware that mentions a missing libEGL.so, reinstall the graphics driver, reboot and build Autoware again.


Known-issues-and-possible-workarounds.md

Known issues

Symptom Simulink PreScan ROS Vector Map Git PC setup
Car is not moving 2 1, 3
Simulink stuck on initialization
PreScan model table missing 8
Error with array sizes 4 4
Wrong map / no map displayed in Rviz 1, 3
Wrong orientation of goal 5 5
Error generating vector map 6 6
MATLAB doesn't open from PreScan 7
Error generating pointcloud map 9
Issue with pip 10

Issue 1: ROS has the wrong map loaded because another Roscore is running with old parameters

What are the symptoms?

The simulation started (you see the car and sensor data in Rviz) and you have changed the world_name parameter in the launch files my_map.launch and SSMP.launch. The car is not moving and you get an error about the global planner being unable to find a path. In Rviz you do not see any vector map / point cloud map or you see the wrong one.

How to know if I have this specific issue?

Stop the roslaunch adeye manager_simulation.launch with a ctrl + c and run the command rosnode list. Two outcomes are possible:

  • you see ERROR: Unable to communicate with master!: you do not have this issue
  • you see a list of nodes: you have this issue, see below to fix it

How do I fix it?

Run the following two commands:

rosnode kill -a   #kills all nodes
killall rosmaster #kills the roscore

Issue 2: The car can't move because the position mode in Simulink is ticked.

What are the symptoms?

The simulation started (you see the car and sensor data in Rviz) and you have changed the world_name parameter in the launch files my_map.launch and SSMP.launch. The car is not moving and you can see the blue planning path. The state of the car is forward.

How to know if I have this specific issue?

Check whether all the nodes that should be running are running: use the command rqt_graph for the base map and for your map and see if there are any differences. Run the commands rosnode info /pure_pursuit and rostopic echo /TwistS. If the node is alive and you can see messages on /TwistS, you are likely to have this issue.

How do I fix it?

From the Simulink side, enter the dynamic empty block in ego car, double click the MuxState block and untick the position mode.

Issue 3: RViz shows a wrong map

What are the symptoms?

The simulation has started and is visible in Rviz, but Rviz shows another road map. There is no blue planning path since the road map is not the correct one.

How to know if I have this specific issue?

Rviz shows a road map but it is the wrong map.

How do I fix it?

Update the world_name parameter in the launch files my_map.launch and SSMP.launch with the name of your world.

Issue 4: ROS array size error

What are the symptoms?

When you run a simulation for the first time you get an error saying "Error in port widths or dimensions. Output port X is a one dimensional vector with Y elements." "Error in port widths or dimensions. Input port Z is a one dimensional vector with W elements." where ports X and Z are connected.

How to know if I have this specific issue?

Simulation stops due to the above error.

How do I fix it?

Some of the messages sent in Simulink exceed the maximum array size and have to be manually modified the first time the simulation is executed on a computer. The message length can be modified in Tools > Robot Operating System > Manage array sizes. To modify the parameters, untick Use default limits for this message type. The parameters that need to be changed are:

| Message type | Array property | Maximum length |
|---|---|---|
| sensor_msgs/Image | Data | 2073600 |
| sensor_msgs/Image | Encoding | 4 |
| std_msgs/Float32MultiArray | Data | 57600 |

Issue 5: Goal has a wrong orientation

What are the symptoms?

When you run the simulation, you notice in Rviz that the blue path doesn't follow the road lane until the goal.

How to know if I have this specific issue?

Rviz opens correctly and shows the car sensors data and the car moves but the blue path to the goal is incorrect and doesn't follow the road lane near the goal.

How do I fix it?

Make sure that the orientation of the goal that you set in the ROS Send Goal block is correct. In Prescan, the x axis is to the right, the y axis is up and the z axis is coming out of the screen. To set the car orientation, only the Euler angle around z should be non-zero. To get the quaternion, check the following website. Enter the x, y and z values as the Euler Angles and click on Apply Rotation to get the Quaternion values. To make sure that the goal orientation is correct, check that the blue path is drawn correctly on the road lane up to the goal point.
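Since only the rotation around z should be non-zero, the quaternion can also be computed directly instead of using the website. A minimal sketch (the function name is illustrative; the input is the desired yaw angle in radians):

```python
import math

def yaw_to_quaternion(yaw):
    """Return the quaternion (x, y, z, w) for a rotation of `yaw` radians about the z axis."""
    half = yaw / 2.0
    return (0.0, 0.0, math.sin(half), math.cos(half))

# Example: a goal pointing along the y axis (90 degrees from the x axis)
x, y, z, w = yaw_to_quaternion(math.pi / 2)
print(round(z, 4), round(w, 4))  # 0.7071 0.7071
```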

Issue 6: Error while trying to generate the vector map

What are the symptoms?

You have created the world in Prescan and are trying to generate the vector map from the Ubuntu system. You have modified the variable PEX_FILE_LOCATION in the file main.py in AD-EYE_Core/Pex_Data_Extraction/pex2csv to add the path to the pex file that should be transformed into a vector map. But still, the vector map is not generated.

How to know if I have this specific issue?

While you are trying to run the command python3 main.py, you are getting an error similar to the figure below.

How do I fix it?

Ensure that you have followed the rules for creating the vector map given in https://gits-15.sys.kth.se/AD-EYE/AD-EYE_Core/wiki/VectorMapper-Rules. Check whether the roads connected to roundabouts and spiral roads are aligned properly according to the rules.

Issue 7: MATLAB doesn't open from PreScan

What are the symptoms?

MATLAB & Simulink do not open when invoked from PreScan by clicking on Invoke Simulation Run Mode.

How to know if I have this specific issue?

Try to open MATLAB through the Start menu.

How do I fix it?

There are two possible ways to solve this issue.

    • Restart the system. If this method doesn't work, try the second way:
    • First, browse to the following folder: C:\Users\adeye\AppData\Roaming\MathWorks\MATLAB
    • Second, delete or rename the folder for your release of MATLAB, e.g. R2020b. Do not remove any folder that ends with "_licenses".
    • Third, restart MATLAB.

For more information, visit this website: https://www.mathworks.com/matlabcentral/answers/97167-why-will-matlab-not-start-up-properly-on-my-windows-based-system#answer_106517

Issue 8: Error while trying to generate compilation sheet on PreScan

What are the symptoms?

In Simulink, you can't generate the compilation sheet.

How to know if I have this specific issue?

Matlab sends you the error shown on the picture below (it can also appear in the terminal):

How do I fix it?

In Matlab, you have to navigate to the folder of the Simulink file you are about to run. To do so, just double click on the folder in the left section.

Issue 9: Error while trying to generate the pointcloud map

What are the symptoms ?

While running the mapping experiment, Simulink prints the following error stating that the pcd file cannot be opened:

How to know if I have this specific issue?

The Pointcloud_Files folder does not exist in your simulation world folder.

How do I fix it?

Create a folder named Pointcloud_Files with the same path as the mapping folder to generate the pcd files.

Issue 10: Issue with pip, the pip version is too recent

What are the symptoms ?

You get an error while trying to use pip.

How to know if I have this specific issue?

You get the same error as shown on this picture:

How do I fix it?

The latest pip version uses "f-strings", which are only supported in Python 3.6 and above. To fix it, you can download pip here for the version of Python you are using. The setup script should be run with python get-pip.py for Python 2 or python3 get-pip.py for Python 3. It will install pip and the problem should be solved.
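As a quick sanity check, you can verify whether the interpreter you run pip with is recent enough (this snippet only inspects the version; the 3.6 threshold comes from the f-string requirement above):

```python
import sys

# The latest pip needs Python >= 3.6 because it uses f-strings
if sys.version_info < (3, 6):
    print("Python too old for the latest pip:", sys.version.split()[0])
else:
    print("Python version OK:", sys.version.split()[0])
```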

Issue 11: Issue with ARMADILLO library, could not find it (can apply to other libraries)

What are the symptoms ?

You get an error when you try to build Autoware.

How to know if I have this specific issue?

You get this specific error :

--- stderr: lattice_planner
CMake Error at /usr/share/cmake-3.5/Modules/FindPackageHandleStandardArgs.cmake:148 (message):
  Could NOT find Armadillo (missing: ARMADILLO_LIBRARY ARMADILLO_INCLUDE_DIR)
Call Stack (most recent call first):
  /usr/share/cmake-3.5/Modules/FindPackageHandleStandardArgs.cmake:388 (_FPHSA_FAILURE_MESSAGE)
  /usr/share/cmake-3.5/Modules/FindArmadillo.cmake:92 (find_package_handle_standard_args)
  CMakeLists.txt:22 (find_package)

How do I fix it?

You can fix it by using the command sudo apt-get install libarmadillo-dev. If you have the same error message with another library, just replace armadillo with the name of the other library in the command.


One Page Wiki

If you have an issue that is probably listed somewhere on the wiki but you cannot find where, use the one page wiki. It is a concatenation of all the other pages to allow easy search using ctrl-f. Once you find the relevant part, the preceding underlined text indicates which page it comes from.

Useful links

Visual C++ Runtime Library: Runtime Error

Installation of Autoware: Bugs and how to fix them

Issues with graphics drivers and Ubuntu

Many of the issues encountered on Ubuntu connected to the drivers are listed here


Modding-Cities-Skylines-game.md

Summary

Get started

Back to summary

Useful links

Visual Studio setup

  • Download Visual Studio 2019, choosing "Community".

  • Then open the "Visual Studio Installer". Select .NET desktop development and then select the .NET Framework of your choice.

  • Open Visual Studio 2019 and create a new project, find and select Class Library (.NET Framework)

  • Give the project a name; the location should be C:\Users\adeye\AppData\Local\Colossal Order\Cities_Skylines\Addons\Mods. There should be a way to use a location of your choice, but so far this method works. Do not forget to tick "Place solution and project in the same directory"

Write a simple mod

  • First, the references have to be added to the project. Go to References → Add Reference → Browse

C:\Program Files (x86)\Steam\steamapps\workshop\content\255710\530771650 (for PrefabHook - a game mod needed to make the mod work)

C:\Program Files (x86)\Steam\steamapps\common\Cities_Skylines\Cities_Data\Managed (for all the important assemblies)

  • After selecting the assemblies (in the picture above the most important ones are highlighted), add them and tick them.

  • We can now write a little first mod: there is a class called Class1.cs; delete it (or rename it). Right click on the .csproj → Add → New Item → Class → give your class a name (for instance "MyFirstMod") → click on Add.

  • Replace all the code with:


    using System;
    using ICities;

    namespace Wiki
    {
        public class MyFirstMod : IUserMod
        {
            public string Name => "First Mod";
            public string Description => "My First Mod";
        }
    }


  • Right click on the .csproj (here it is called Wiki) go to properties then Build Events copy and paste the following for the post build event:

    mkdir "%LOCALAPPDATA%\Colossal Order\Cities_Skylines\Addons\Mods\$(SolutionName)"
    del "%LOCALAPPDATA%\Colossal Order\Cities_Skylines\Addons\Mods\$(SolutionName)\$(TargetFileName)"
    xcopy /y "$(TargetPath)" "%LOCALAPPDATA%\Colossal Order\Cities_Skylines\Addons\Mods\$(SolutionName)"

  • IMPORTANT NOTE: Unfortunately, the post build event sometimes won't work. In that case, copy the .dll (named after the project, here Wiki.dll) from C:\Users\adeye\AppData\Local\Colossal Order\Cities_Skylines\Addons\Mods\Wiki\bin\Debug and paste it in C:\Users\adeye\AppData\Local\Colossal Order\Cities_Skylines\Addons\Mods\Wiki\

  • Open the game → Content manager → Mod (or Camera Script). If the mod does not appear, make sure you have copied the .dll to the right place, and restart the game. The mod is likely to appear in "Camera Script", but when some improvements are made, the mod might move to "Mod".

  • Toggle the button On/Off to activate or deactivate the mod (the mod does nothing yet).

Using ModTools for reverse engineering

Back to summary

  • ModTools is a Mod for Cities: Skylines Steam ModTools: For more info

  • By clicking the arrow in the top right corner, the objects explorer is enabled. By hovering with the mouse over an object, its ID and asset name can be seen. It works for cars, buildings, roads, pedestrians...

  • When hovering over a car, open the scene explorer by clicking the left mouse button. It will open all the information related to the selected car, for instance: its ID, its location, its velocity, the current path ID and many more. The "path" of the selected vehicle appears at the top of the window; it is important when coding because it shows from where the selected objects can be accessed (the class name, for instance; for vehicles it is VehicleManager).

  • Clicking on the road will also open the scene explorer. From there it is possible to access the "path"; the class is called "NetManager" and it deals with the roads. The road is surrounded by nodes, and it is possible to access the IDs of those nodes. In addition, the left and right roads can be accessed (if they exist).

  • By clicking on the nodes, you will also have access to the node's information:

  • If we go back to the vehicle's scene explorer, one can find something called Path. On the right there is a button called "Show path unit"; clicking on it will show the following (see below). A path is composed of 12 positions (road segments). When the vehicle reaches the end of its path, another path is attributed to it.

Parameters for data collection

Back to summary

  • There are four parameters displayed here:

  • The opening angle will ignore everything that is not in front of the vehicle, see the angle below:

  • The "Angle moving forward" will ignore the vehicles going the same way as the vehicle collecting data.

  • The "Angle moving backward" will ignore the vehicles going the opposite way to the vehicle collecting data.

  • And the distance from the vehicle.


nodeArchitectureCPP.md

C++ Node Architecture

This file describes the architecture that C++ ROS Nodes should follow in the AD-EYE platform. These rules are intended to help produce readable code.

Overall Notes

The main idea is to have clean and readable code that is easy to maintain. Every common sense rule for clean and organized code is therefore welcome, in addition to the following guidelines.

Class

We usually use classes to implement a C++ Node. They should follow the following architecture:

Attributes

Just after the class declaration we define all the attributes (as organized as possible). They come first so we can easily see what we are working with.

Note: the private keyword is recommended (for readability) but not necessary, as the class keyword sets private by default.

Constructor

Then the first method that should appear is the constructor. It initializes the member ros::NodeHandle reference and all other attributes. Then we initialize the publishers and the subscribers.

Note: Here the public keyword is necessary.

Even if the variables are not initialized in exactly this order, the categories should be clearly distinguishable at first glance.

Callbacks

Then, we define all the callbacks that will be called by topic subscriptions.

Run method

After the callbacks, a run method needs to be implemented. The name of the method should follow the logic of the Node. This method contains the main instructions of the Node. It is where the ros::spin() call should be.

Note: Another possible way is to call ros::spin() at the end of the main function and to call this running method inside a callback.

Other methods

Any other function that the node uses, for any purpose, is implemented afterwards.

main function

Last but not least, the main function is implemented. If all the previous rules have been respected, it should look like the following: it just creates the node and possibly calls the run method (if that is not done inside the constructor).

int main(int argc, char **argv)
{
	ros::init(argc, argv, "<NodeName>");
	ros::NodeHandle nh;

	<NodeName> <instanceName>(nh);
	<instanceName>.run();
}

Note: Using a reference attribute to the ros::NodeHandle in the node is recommended, as it avoids keeping multiple useless copies of the ros::NodeHandle:

class <NodeName>
{
private:
  [...]
	ros::NodeHandle& _nh;

public:
	<NodeName>(ros::NodeHandle& nh) : _nh(nh) {
    [...]
    }
  [...]
};

Nvidia-Drive-PX2-Hardware.md

NVIDIA Drive PX2

There are two releases of the Drive PX2 (DPX2). We possess the DPX2 AutoChauffeur (P2379) version.
NVIDIA DRIVE PX 2 Archive
DriveWorks SDK Reference Documentation

General Specification

Computing:

2 Tegra X2 SoCs

2 Pascal GPUs

CPU (Tegra X2):

2 x 4 ARM Cortex A57 cores

2 x 2 Denver cores

GPU:

2 x Parker GPGPU (Tegra X2)

2 x dGPU (discrete GPU)

Memory:

6.3 GB

Storage:

43.2 GB

Connectors:

An overview of the available connectors together with information on how to connect utilities can be found here.

Other information regarding the Drive PX2 platform can be found here.

Tegra X2 (Parker series)

The board has two Tegra X2 (Parker) System on Chips (SoCs). Each SoC has a coherent multicore processor that has 2 NVIDIA Denver 2 ARM cores and 4 ARM Cortex-A57 cores. The Denver 2 cores each have 128KB Instruction and 64KB Data level 1 cache and 2MB shared level 2 unified cache.

Each SoC also implements a GPU built on NVIDIA's Pascal architecture. The GPU has 256 CUDA cores and supports all the same features as discrete NVIDIA GPUs.

The SoCs also have other peripherals such as an Audio Processing Engine (APE), Always-On Sensor Processing (AON/SPE), a Video Decoder and Encoder, a Boot and Power Management Processor (BPMP), a Safety and Camera Engine (SCE) and more. A functional block diagram and more detailed information can be found in the Technical Reference Manual for the Parker series.

Graphical Processing Units

There are 4 GPUs present on the platform: one in each Tegra X2 SoC and two discrete GPUs, all built on NVIDIA's Pascal architecture.

Useful information can be retrieved by running deviceQuery with the following commands in the terminal:

cd /usr/local/cuda/samples/1_Utilities/deviceQuery # go to the folder containing the source code
sudo make # compile it
sudo ./deviceQuery # run it

The table below summarizes some of the parameters generated by deviceQuery:

| Parameter | Discrete GPU | Integrated GPU (Tegra X2) |
| --- | --- | --- |
| Total amount of global memory | 3840 MBytes | 6402 MBytes (shared with CPU) |
| Multiprocessors (MP) | 9 | 2 |
| CUDA Cores/MP | 128 | 128 |
| CUDA Cores (total) | 1152 | 256 |
| GPU Max Clock rate | 1290 MHz (1.29 GHz) | 1275 MHz (1.27 GHz) |
| Memory Clock rate | 3003 MHz | 1600 MHz |
| Memory Bus Width | 128-bit | 128-bit |
| L2 Cache Size | 1048576 bytes | 524288 bytes |
| Total amount of constant memory | 65536 bytes | 65536 bytes |
| Total amount of shared memory per block | 49152 bytes | 49152 bytes |
| Total number of registers available per block | 65536 | 32768 |
| Warp size | 32 | 32 |
| Maximum number of threads per multiprocessor | 2048 | 2048 |
| Maximum number of threads per block | 1024 | 1024 |

The full result is stated below:

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 2 CUDA Capable device(s)

Device 0: "DRIVE PX 2 AutoChauffeur"
  CUDA Driver Version / Runtime Version          9.2 / 9.2
  CUDA Capability Major/Minor version number:    6.1
  Total amount of global memory:                 3840 MBytes (4026466304 bytes)
  ( 9) Multiprocessors, (128) CUDA Cores/MP:     1152 CUDA Cores
  GPU Max Clock rate:                            1290 MHz (1.29 GHz)
  Memory Clock rate:                             3003 Mhz
  Memory Bus Width:                              128-bit
  L2 Cache Size:                                 1048576 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 4 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

Device 1: "NVIDIA Tegra X2"
  CUDA Driver Version / Runtime Version          9.2 / 9.2
  CUDA Capability Major/Minor version number:    6.2
  Total amount of global memory:                 6402 MBytes (6712545280 bytes)
  ( 2) Multiprocessors, (128) CUDA Cores/MP:     256 CUDA Cores
  GPU Max Clock rate:                            1275 MHz (1.27 GHz)
  Memory Clock rate:                             1600 Mhz
  Memory Bus Width:                              128-bit
  L2 Cache Size:                                 524288 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 32768
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            Yes
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 0 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
> Peer access from DRIVE PX 2 AutoChauffeur (GPU0) -> NVIDIA Tegra X2 (GPU1) : No
> Peer access from NVIDIA Tegra X2 (GPU1) -> DRIVE PX 2 AutoChauffeur (GPU0) : No

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 9.2, CUDA Runtime Version = 9.2, NumDevs = 2
Result = PASS

One-Computer-Setup.md

One Computer Setup

This guide provides the steps to establish a One Computer Setup, which consists of doing a GPU passthrough (or PCI passthrough) and setting up a Virtual Machine that runs on the isolated GPU.

Essentially, with the PCI passthrough, one of the GPUs is isolated from the NVIDIA driver and a dummy driver is loaded there instead.

For its part, the VM allows two operating systems to run on the same computer at the same time with good graphics performance (which is not always the case with standard virtual machines without GPU passthrough).

WARNING: Before attempting anything further, we highly recommend reading this guide in its entirety, as well as the links in the references section.

Specifications

This guide has been tested with the following machine:

  • AMD Ryzen Threadripper 2950X 16-Core
  • 64 GB of RAM
  • 2x NVidia RTX 2080Ti
  • Ubuntu 16.04
  • Windows 10 for the virtual Machine

GPU Isolation

Before doing anything, update BIOS to latest available version.

Then, in BIOS :

  • Disable all RAID configuration (in Advanced -> AMD PBS)
  • Enable Enumerate all IOMMU in IVRS (in Advanced -> AMD PBS)
  • Turn on VT-d / SVM Mode (in Advanced -> CPU Configuration)

NVidia settings

First of all, attach one monitor to a GPU, and another monitor to the other GPU. Then go to the NVidia settings by typing sudo gksu nvidia-settings (the "sudo" is important as we'll change the configuration of the X-server). If the above command does not work, install gksu using sudo apt-get install gksu

Go to the X server Display configuration. Enable the screen connected to the second GPU. Activate the Xinerama setting as shown on the figure below :

Then you click on Save to X Configuration file which pops up the window :

Save and reboot. Before continuing, Ubuntu should show the display on both monitors.


Enabling IOMMU

First, enable IOMMU by modifying the GRUB config: sudo nano /etc/default/grub

and edit it to match:

  • GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on iommu=pt kvm_amd.npt=1" if you run on an AMD CPU
  • GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on vfio_iommu_type1.allow_unsafe_interrupts=1 pcie_acs_override=downstream" if you run on an Intel CPU

Save (Ctrl+X -> Y -> Enter). Afterwards run sudo update-grub and reboot your system.

Afterwards, one can verify that the IOMMU is enabled with dmesg | grep AMD-Vi (for an AMD CPU) or dmesg | grep -i iommu (for an Intel CPU).

You should get this output for an AMD CPU :

adeye@adeye:~$ dmesg | grep AMD-Vi
[0.885677] AMD-Vi: IOMMU performance counters supported
[0.885727] AMD-Vi: IOMMU performance counters supported
[0.903346] AMD-Vi: Found IOMMU at 0000:00:00.2 cap 0x40
[0.903347] AMD-Vi: Extended features (0xf77ef22294ada):
[0.903352] AMD-Vi: Found IOMMU at 0000:40:00.2 cap 0x40
[0.903353] AMD-Vi: Extended features (0xf77ef22294ada):
[0.903356] AMD-Vi: Interrupt remapping enabled
[0.903357] AMD-Vi: virtual APIC enabled
[0.903695] AMD-Vi: Lazy IO/TLB flushing enabled

Identification of the guest GPU

To list all the IOMMU Groups and devices, run the following command:

find /sys/kernel/iommu_groups -type l

You should get an output of this type:

/sys/kernel/iommu_groups/7/devices/0000:00:15.1
/sys/kernel/iommu_groups/7/devices/0000:00:15.0
/sys/kernel/iommu_groups/15/devices/0000:03:00.0
/sys/kernel/iommu_groups/5/devices/0000:00:14.2
/sys/kernel/iommu_groups/5/devices/0000:00:14.0
/sys/kernel/iommu_groups/13/devices/0000:01:00.2
/sys/kernel/iommu_groups/13/devices/0000:01:00.0
/sys/kernel/iommu_groups/13/devices/0000:01:00.3
/sys/kernel/iommu_groups/13/devices/0000:01:00.1
/sys/kernel/iommu_groups/3/devices/0000:00:08.0
/sys/kernel/iommu_groups/11/devices/0000:00:1c.0
/sys/kernel/iommu_groups/1/devices/0000:00:01.0
/sys/kernel/iommu_groups/8/devices/0000:00:16.0
/sys/kernel/iommu_groups/16/devices/0000:04:00.0
/sys/kernel/iommu_groups/6/devices/0000:00:14.3
/sys/kernel/iommu_groups/14/devices/0000:02:00.2
/sys/kernel/iommu_groups/14/devices/0000:02:00.0
/sys/kernel/iommu_groups/14/devices/0000:02:00.3
/sys/kernel/iommu_groups/14/devices/0000:02:00.1
/sys/kernel/iommu_groups/4/devices/0000:00:12.0
/sys/kernel/iommu_groups/12/devices/0000:00:1f.0
/sys/kernel/iommu_groups/12/devices/0000:00:1f.5
/sys/kernel/iommu_groups/12/devices/0000:00:1f.3
/sys/kernel/iommu_groups/12/devices/0000:00:1f.4
/sys/kernel/iommu_groups/2/devices/0000:00:01.1
/sys/kernel/iommu_groups/10/devices/0000:00:1b.0
/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/9/devices/0000:00:17.0

Or run the following command to get information on the NVIDIA devices only:

(for d in /sys/kernel/iommu_groups/*/devices/*; do n=${d#*/iommu_groups/*}; n=${n%%/*}; printf 'IOMMU Group %s ' "$n"; lspci -nns "${d##*/}"; done;) | grep NVIDIA

You should get an output of this type:

IOMMU Group 16 0a:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:1e04] (rev a1)
IOMMU Group 16 0a:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:10f7] (rev a1)
IOMMU Group 16 0a:00.2 USB controller [0c03]: NVIDIA Corporation Device [10de:1ad6] (rev a1)
IOMMU Group 16 0a:00.3 Serial bus controller [0c80]: NVIDIA Corporation Device [10de:1ad7] (rev a1)
IOMMU Group 34 42:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:1e04] (rev a1)
IOMMU Group 34 42:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:10f7] (rev a1)
IOMMU Group 34 42:00.2 USB controller [0c03]: NVIDIA Corporation Device [10de:1ad6] (rev a1)
IOMMU Group 34 42:00.3 Serial bus controller [0c80]: NVIDIA Corporation Device [10de:1ad7] (rev a1)

We recommend storing this output in a text file, so you won't have to run this command multiple times.

Every device group must be passed through together; a passthrough of only one of the devices will not work. Each GPU typically also has an associated audio device that must also be passed through. On the latest NVIDIA GPUs, like the RTX 2000 series, you will also see a USB bus controller, due to the USB-C port on the GPU.

If the IOMMU grouping is not successful, you need to apply the ACS patch. Click here for instructions.
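To check programmatically which devices share an IOMMU group, the grep output above can be parsed. A small illustrative sketch (the function name is ours; the sample lines in the usage below come from the output shown above):

```python
import re

def iommu_groups(lines):
    """Group PCI device addresses by IOMMU group number, parsing lines
    of the form 'IOMMU Group 16 0a:00.0 VGA compatible controller ...'."""
    groups = {}
    for line in lines:
        m = re.match(r"IOMMU Group (\d+) ([0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f])", line)
        if m:
            groups.setdefault(int(m.group(1)), []).append(m.group(2))
    return groups
```

With the sample output above, group 16 should contain all four functions 0a:00.0 through 0a:00.3, confirming they have to be passed through together.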


Isolation of the selected GPU

  1. Create the file vfio-pci-override-vga.sh which shall be placed in the /sbin folder:
cd /sbin/
sudo gedit vfio-pci-override-vga.sh
  2. Add these lines to the file (they override the driver of the isolated GPU with the vfio-pci kernel module):
#!/bin/sh
modprobe -r -v nouveau
modprobe -r -v nvidia
echo "vfio-pci" > /sys/bus/pci/devices/0000:0a:00.0/driver_override
echo "vfio-pci" > /sys/bus/pci/devices/0000:0a:00.1/driver_override
echo "vfio-pci" > /sys/bus/pci/devices/0000:0a:00.2/driver_override
echo "vfio-pci" > /sys/bus/pci/devices/0000:0a:00.3/driver_override
modprobe -i -v vfio-pci
modprobe -i -v nvidia

Remark: the first GPU in the IOMMU groups is not necessarily the first on the motherboard. BE CAREFUL TO CHANGE THE IDS TO MATCH THE ISOLATED GPU.

  3. Create the file vfio.conf in /etc/modprobe.d/:

sudo gedit /etc/modprobe.d/vfio.conf

  4. Add the following:
##merged blacklists. current config chooses nvidia drivers.
##rmmod and modprobe -i do not work here. move kernel module commands to the installed .sh file
blacklist nouveau
blacklist lbm-nouveau
options nouveau modeset=0
alias nouveau off
alias lbm_nouveau off
#softdep nouveau pre: vfio-pci
#softdep nvidia pre: nouveau ##check if this line or softdeps are required at all.
install vfio-pci /sbin/vfio-pci-override-vga.sh
options vfio-pci ids=0000:02:00.0,0000:02:00.1,0000:02:00.2,0000:02:00.3
  5. Make the .sh file executable by running this command:

sudo chmod u+x /sbin/vfio-pci-override-vga.sh

  6. Separately create the file /etc/modprobe.d/nvidia.conf, which contains:
softdep nvidia_525 pre: vfio-pci
softdep nouveau pre: vfio-pci
softdep nvidia pre: vfio-pci
softdep nvidia-* pre: vfio-pci
softdep nvidia_* pre: vfio-pci

Note that the first module name should contain the version of your NVIDIA driver: in this tutorial, nvidia_525 for version 525.

  7. Run sudo update-initramfs -u to update your boot image. Reboot.

If this succeeds, only one of your GPUs, the one you have not overridden, will be able to show the login screen. Otherwise, both screens display information as before.

  8. Run lspci -nk to confirm the isolation. You should get an output like this one:
0a:00.0 0300: 10de:1e04 (rev a1)
	Subsystem: 1462:3711
	Kernel driver in use: vfio-pci
	Kernel modules: nvidiafb, nouveau, nvidia_drm, nvidia
0a:00.1 0403: 10de:10f7 (rev a1)
	Subsystem: 1462:3711
	Kernel driver in use: vfio-pci
	Kernel modules: snd_hda_intel
0a:00.2 0c03: 10de:1ad6 (rev a1)
	Subsystem: 1462:3711
	Kernel driver in use: vfio-pci
0a:00.3 0c80: 10de:1ad7 (rev a1)
	Subsystem: 1462:3711
	Kernel driver in use: vfio-pci

If the isolation is successful, the Kernel driver in use for the isolated GPU is vfio-pci. A failure will show the NVIDIA/nouveau module in use, which means you have to debug what went wrong.

Reboot until the command lspci -nnk gives you this result (pay attention to the kernel driver in use):

0a:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device
[10de:1e04] (rev a1)
	Subsystem: Micro-Star International Co., Ltd. [MSI] Device [1462:3711]
	Kernel driver in use: vfio-pci
	Kernel modules: nvidiafb, nouveau, nvidia_drm, nvidia
0a:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:10f7] (rev a1)
	Subsystem: Micro-Star International Co., Ltd. [MSI] Device [1462:3711]
	Kernel driver in use: vfio-pci
	Kernel modules: snd_hda_intel
0a:00.2 USB controller [0c03]: NVIDIA Corporation Device [10de:1ad6] (rev
a1)
	Subsystem: Micro-Star International Co., Ltd. [MSI] Device [1462:3711]
	Kernel driver in use: xhci_hcd
0a:00.3 Serial bus controller [0c80]: NVIDIA Corporation Device [10de:1ad7]
(rev a1)
	Subsystem: Micro-Star International Co., Ltd. [MSI] Device [1462:3711]
	Kernel driver in use: vfio-pci
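For a scripted sanity check of the driver binding, the lspci output can be parsed as well. This is a rough sketch (the helper name is ours) that assumes clean, unwrapped lspci -nk output, with detail lines indented:

```python
def drivers_in_use(lspci_output):
    """Map each PCI function address to its 'Kernel driver in use',
    parsed from `lspci -nk`-style output."""
    drivers = {}
    device = None
    for line in lspci_output.splitlines():
        if not line:
            continue
        if not line[0].isspace():
            device = line.split()[0]  # e.g. '0a:00.0'
        elif "Kernel driver in use:" in line and device:
            drivers[device] = line.split(":")[-1].strip()
    return drivers
```

Checking that every function of the isolated GPU maps to vfio-pci then reduces to a dictionary lookup.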

Virtual Machine

Prerequisites

Before starting, install the virtualization manager and related software via: sudo apt-get install qemu-kvm libvirt-bin libvirt-daemon-system bridge-utils virt-manager ovmf hugepages

First, if you do not have an administrator with a licence for Matlab and Prescan, you can just follow the Cloning-the-VM tutorial and skip the next steps.

You must download two ISO files that will be used later during the setup:

  1. Download the stable VirtIO: https://docs.fedoraproject.org/en-US/quick-docs/creating-windows-virtual-machines-using-virtio-drivers/index.html

  2. The Windows ISO file has to be downloaded from OneDrive.


Setup of the VM

Open Virt Manager.

We'll first create a temporary VM.

  1. Create a new VM
  2. Select Local install media, and then Forward
  3. Browse the Windows ISO file and select Choose Volume and move Forward
  4. Enter Memory RAM: 4096 MiB, and CPUs: 2, and go Forward
  5. Select Create a disk image with 150 GiB, then Forward (you can name the VM as you want)
  6. Enable Customize configuration before you install, then Finish
  7. In Overview, keep BIOS instead of UEFI
  8. In CPUs, under Configuration, select core2duo, then Apply
  9. In Memory, edit Current allocation to 32136 MiB (which in our case is half the RAM of the computer), then Apply
  10. In IDE Disk 1, change Disk bus to Virtio
  11. In IDE CDROM 1, browse the Windows ISO file (downloaded from Onedrive), then click on Choose Volume
  12. Select Add hardware and go to Storage: ● Click on Select or create custom storage, click on Manage and browse to the stable VirtIO file, then click on Choose Volume ● For Device type, select CDROM device, then click on Finish.

Remark: Leave all network settings as is (no need to setup a bridge network in our case).

After finishing the previous steps, go to the Boot Options and tick VirtIO Disk 1 (IDE Disk 1 before step 10), IDE CDROM 1, and IDE CDROM 2 to enable them. Afterwards, put them in the following order: IDE CDROM 1 > VirtIO Disk 1 > IDE CDROM 2. Then click on Apply.

Select Add hardware: go to the PCI host device and add the NVIDIA PCIs one by one (you can find them with the IDs used during the Isolation of the selected GPU).

Make sure to remove the internet connection (by deleting the NIC hardware) so that the VM cannot connect to the internet. This is useful during the Windows installation, so that it does not ask for a login.


VM Windows installation

Click on Begin installation. Then follow the steps until Windows boots to the desktop screen:

Click on Load Driver, then Browse. Open CD Drive (E:) virtio -> viostor -> w10.

Click amd64, then OK, and then Next. From this moment onwards, Windows recognizes the partition/drive allocated in the settings (the screenshot is old, as we first tried with 100 GB but then had to reset the VM):

Click on Next. Follow the steps for the Windows 10 Home installation. Select No for any tools offered during installation and make sure to select the Basic installation. Once you have booted into Windows, re-add the NIC hardware; this should bring the internet back.

Then, go to device manager and install the missing drivers from the VirtIO CDROM by following these steps:

You will get error code 43 for your GPU, but this is normal: this error occurs when the NVIDIA driver detects that it is running in a virtual environment. Shut down the Windows VM.

Execute the following command to copy the config file of the VM you just set up:

virsh dumpxml temp_VM > New_VM

Replace temp_VM by the name of the temporary VM and New_VM by the name you want for the new VM.

Modify this new xml file with gedit to replace the 3 first lines by these ones :

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
<name>NAME_OF_THE_NEW_VM</name>
<title>NAME_OF_THE_NEW_VM</title>

Replace NAME_OF_THE_NEW_VM with the name given to the new VM. This way, the line defining the UUID of the VM is deleted.

Then copy the following lines between </vcpu> and <os> :

<qemu:commandline>
<qemu:arg value='-cpu'/> 
<qemu:arg value='host,kvm=off,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_vendor_id=whatever'/>
</qemu:commandline>

To have better performance, we'll use hugepages. This feature is enabled by adding the following lines just after the previous qemu:commandline ones:

<memoryBacking>
  <hugepages/>
</memoryBacking>

So the beginning of the xml file looks like:

Important: if you see a path between <nvram> and </nvram>, then something went wrong during the installation (make sure you chose BIOS instead of UEFI as the firmware setting in Overview). If you do not find the tag, you can save your changes and carry on.

Execute virsh define New_VM (this defines a new VM from the file we just modified). The output is:

adeye@adeye06u:~$ virsh define New_VM
Domain New_VM defined from New_VM
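The manual XML edits above (renaming the VM, dropping the uuid line, adding the qemu:commandline arguments and the hugepages backing) can also be scripted. Below is a rough sketch with Python's xml.etree; it appends the new elements at the end of the document and relies on virsh define to normalize their position, so verify the result by hand:

```python
import xml.etree.ElementTree as ET

QEMU_NS = "http://libvirt.org/schemas/domain/qemu/1.0"
ET.register_namespace("qemu", QEMU_NS)

def patch_domain_xml(xml_text, new_name):
    """Apply the manual edits described above to a dumped domain XML:
    rename the VM, drop its uuid, add the qemu:commandline arguments
    (error code 43 workaround) and enable hugepages."""
    root = ET.fromstring(xml_text)

    # Drop <uuid> so virsh define generates a fresh one for the new VM.
    uuid = root.find("uuid")
    if uuid is not None:
        root.remove(uuid)

    # Rename the domain.
    root.find("name").text = new_name

    # Hide the hypervisor from the NVIDIA driver.
    cmdline = ET.SubElement(root, "{%s}commandline" % QEMU_NS)
    ET.SubElement(cmdline, "{%s}arg" % QEMU_NS, value="-cpu")
    ET.SubElement(cmdline, "{%s}arg" % QEMU_NS,
                  value="host,kvm=off,hv_relaxed,hv_spinlocks=0x1fff,"
                        "hv_vapic,hv_time,hv_vendor_id=whatever")

    # Back the guest memory with hugepages.
    ET.SubElement(ET.SubElement(root, "memoryBacking"), "hugepages")

    return ET.tostring(root, encoding="unicode")
```

Feed it the text produced by virsh dumpxml and write the result to the new XML file before running virsh define.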

To ensure that the changes are taken into account, close VirtManager and run sudo systemctl restart libvirtd.

Restart Virt Manager and launch your VM. It should boot on the secondary screen. Make sure Windows detects and uses the assigned GPU (the one you isolated). If that is the case, you are almost done with the One Computer Setup!

Once it has booted on the secondary screen, right-click on the Desktop and select Display Settings. Click Identify to see the number of each screen, and in Multiple Displays select Show only on [the number of the screen on the Ubuntu screen].

Shut down the VM and assign half of the RAM to it. For the CPU, check Copy host CPU configuration and manually set the CPU topology. For both the RAM and the CPU, set the current allocation to the maximum allocation.

In Windows, make sure it uses the hardware you gave to the VM.

You are done !! Well played !! 🥇


In case of blue screen of death at boot...

If you have a blue screen at startup of the VM, execute the following commands:

echo 1 > /sys/module/kvm/parameters/ignore_msrs (root access required)

Then create a .conf file in /etc/modprobe.d/ (for example kvm.conf) that includes the line: options kvm ignore_msrs=1

Set the CPU configuration to "host-passthrough" (enter it by hand as it doesn't exist in the list) in virt Manager.

Also in virt-manager, make sure that the display is configured as "Display VNC" (type: VNC server and Address: Localhost only).

Once it has booted on the secondary screen, shut down the VM and assign half of the RAM to it. For the CPU, check Copy host CPU configuration and manually set the CPU topology. In my case, with a 32-thread CPU, I gave 1 socket, 8 cores and 2 threads. For both the RAM and the CPU, set the current allocation to the maximum allocation (16 cores and 32 GB RAM in my case). In Windows, make sure it uses the hardware you gave to the VM.

🎊 Congratulations, the setup is finished 🎊

References

http://mathiashueber.com/amd-ryzen-based-passthrough-setup-between-xubuntu-16-04-and-windows-10/ (maybe the most useful if the setup is done with a Ryzen CPU)

http://mathiashueber.com/ryzen-based-virtual-machine-passthrough-setup-ubuntu-18-04/

https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF

https://bufferoverflow.io/gpu-passthrough/

https://blog.zerosector.io/2018/07/28/kvm-qemu-windows-10-gpu-passthrough/

https://heiko-sieger.info/running-windows-10-on-linux-using-kvm-with-vga-passthrough/#The_Need


OpenSCENARIO.md

OpenSCENARIO is a standard format to describe dynamic content in driving situations.

A description of the format can be found here: ASAM OpenSCENARIO user guide.

OpenSCENARIO in AD-EYE


Other-Pages.md

Creating from real world data

Working with City skylines

Other


pointcloud_map_from_rosbag.md

Autoware

Autoware provides the node ndt_mapping, which can be used to create point cloud maps from rosbags.

In the Autoware runtime manager, go to the Computing tab, then to the Localization -> lidar_localizer section. Click on app to tweak the parameters.

We can choose to use the IMU and/or the odometry data to improve the results.

Running the mapping process

A video showing the steps to follow can be found here: https://www.youtube.com/watch?v=ss6Blrz23h8

Basically, here are the steps:

  1. First, go to Simulation tab and select the rosbag file.

  2. Click Play and then Pause to set rosparam use_sim_time to true.

  3. Go to the Setup tab, input the parameters describing the relative position between the vehicle and the localizer, push TF, and push Vehicle Model (if you leave the field blank, the default model will be loaded).

  4. In the Computing tab, select ndt_mapping (and, if needed, tweak the parameters by clicking on the app button).

  5. ndt_mapping will read from /points_raw
    If the pointcloud is being published in a different topic, use the relay tool in a terminal window:
    rosrun topic_tools relay /front/velodyne_points /points_raw

  6. You can visualize the mapping process with rviz. A configuration file can be found in <Autoware>/src/.config/rviz/ndt_mapping.rviz

  7. After finishing the mapping, click the app button of ndt_mapping, input the name of the PCD file and the filter resolution, and click PCD OUTPUT. The PCD file will be written to the directory you specified.

Parameters description

Some descriptions are given in tooltips by hovering the mouse over the parameter label.

Here is some extra information about some of the parameters:

  • Resolution:
    Too low and it just does not work at all; too high and some tilt errors occur.

  • Step size:
    When increasing the value, the accuracy is reduced (especially when the car turns).

  • Transformation Epsilon:
    Looks like a tolerance value.

  • Maximum iterations:
    It could be good to increase this value but regarding the output console, the number of iterations is rarely above 10 and the algorithm seems to converge each time.

  • Leaf Size:
    Increasing this value gives better results: lines appear well aligned (window edges and railings). But it also increases the computation time.

  • Minimum Scan Range:
    Useful in order to remove the car footprint.

  • Maximum Scan Range:
    An error in rotation makes far edges much more misaligned than close ones, so it is better not to use too high a value, especially if the car drives everywhere. On the other hand, having long lines may help to align the point clouds correctly.

  • Minimum Add Scan Shift:
    Depends on the point cloud density that we want.

  • Method Type:

    • We always used the generic method, which works well and is reliable.
    • The Anh method (named after its creator) appears to consume more memory.
    • The openmp method (which may be a multi-threaded implementation of the generic one) simply does not work properly.

Finally, the values on the image above are the ones that gave really good results in our tests. In any case, most of the parameters do not change the results much.

Merging multiple maps

If you want to merge multiple maps provided by multiple mappings, you will have to do it by hand (at the moment, nothing else has worked).

One way to do it is to use rviz :

We want to load two point cloud maps in two different frames and use the static_transform_publisher to move one map against the other.

So, we modified the code of the point_map_loader from Autoware (Autoware/ros/src/data/packages/map_file/nodes/points_map_loader) in order to publish the map in another frame.
You can download the modified node here.

We use the Autoware Map tab to load the first map (it will be published in the topic points_map and frame map).

Then we use our modified point cloud loader to publish the second map in the frame map2.
We run it with the command rosrun pcloader pcloader <topic> <frame_id> <pcd files> ...

Now the painful part begins: we use the static_transform_publisher to move the second map by hand (the last argument is the publishing period in milliseconds).
rosrun tf static_transform_publisher x y z yaw pitch roll map map2 100

When you have found the perfect transformation for your second map, you can apply it to the point cloud (with Matlab for example).

MATLAB

The first thing we tried was to create the map with Matlab. In the end, the maps provided by Autoware are way better, but here is the Matlab method. It gives the steps to follow and the functions to use to manipulate point clouds in Matlab.

We based our work on a Matlab tutorial which explains how to proceed : https://www.mathworks.com/help/driving/examples/build-a-map-from-lidar-data.html
Also, another Matlab page explains how to deal with rosbags : https://www.mathworks.com/help/ros/ug/work-with-rosbag-logfiles.html

Then, we built our own script in order to create maps.

Here are the different steps to consider :

  1. Process the next point cloud to merge with the map.
    This step consists of removing the car footprint, the ground, and possibly some noise.
  • user-defined function processPointCloud()
  2. Downsample the next point cloud.
    This step is necessary to reduce the computation time and improve accuracy.
  • Matlab built-in function pcdownsample()
  3. Register the next point cloud against the map.
    It is in this step that the NDT algorithm is used. The registration process gives a transformation to apply to the next point cloud in order to place it properly inside the map.
  • Matlab built-in function pcregisterndt() for registering
  • Matlab built-in function pctransform() to apply the transformation
  4. Merge the point cloud with the map.
  • Matlab built-in function pcmerge()

This is the main loop to create a map; some additional things are done to improve the result.

First, the point cloud data are really large, so we process only a part of the rosbag at a time.

The main improvement consists in the estimation of the transformation. We can estimate a transformation in order to help the algorithm and improve the results.

  • The first approach consists of assuming the car moves at constant speed, so the next transformation can be estimated to be the same as the previous one.

  • Then, we can use some sensor data (in this case the IMU) to estimate the rotational part of the transformation.

This second point is a bit unclear in the tutorial. In our case, we consider the first point cloud as the reference, so every other point cloud needs to be oriented against the first one. So, we compare the IMU information of the first point cloud with that of the current point cloud, which gives an estimate of the rotational transformation to apply.

In the tutorial, the initial transformation is given as an argument of the registration function. The problem is that the registration function relies too much on that initial transformation and does not deviate much from it.
So, we do things a bit differently:
We apply the rotational transformation (given by the IMU data) to the next point cloud before the registration. The initial transformation given to the registration function then consists only of the previous translation part. Using this method, we obtained the best results we could with Matlab.
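As a minimal sketch of this pre-alignment idea (the original script is in Matlab; the Python code, the yaw-only simplification and the function names below are ours, and the sign convention depends on the IMU frame):

```python
import numpy as np

def rotation_from_yaw(yaw):
    # Rotation matrix about the z axis for a yaw angle in radians.
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def prealign(points, yaw_first, yaw_current):
    # Rotate the current cloud so its orientation matches the first
    # (reference) cloud, using the IMU yaw of both clouds.
    relative = rotation_from_yaw(yaw_current - yaw_first)
    return points @ relative.T

# N x 3 point cloud; with no relative yaw the cloud is unchanged.
cloud = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
aligned = prealign(cloud, yaw_first=0.3, yaw_current=0.3)
```

The registration function would then only receive the remaining translation as its initial transformation, as described above.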


Prescan-code-generation.md

Preparation

1. Install Prescan: (might be already installed)

First, you can verify if you have already installed Prescan on your computer.

ls /usr/local

You can look in these folders and sub-folders for the version you need. We recommend using version 2020.4.

https://kth.app.box.com/folder/53896402743

  • Ubuntu: Download the file Simcenter-Prescan-2020.X.0-lin64.sh

To execute it, go to the folder containing the file, then run:

chmod +x Simcenter-Prescan-2020.4.0-lin64.sh 
sudo ./Simcenter-Prescan-2020.4.0-lin64.sh

You can verify once more with ls /usr/local

  • Windows:

[not yet determined]

2. Edit Prescanrun:

  • Verify that the environment variable which defines the Prescan License server is included in the service definition, otherwise include it.
sudo nano /etc/systemd/system/deploymentservice.service

On Prescan 2020.1: [email protected]

On Prescan 2020.3: [email protected]

On Prescan 2020.4: [email protected]

  • Additionally, make sure that the username in the service definition is not "prescan", it should be the default username in the computer.

  • Don't forget to update the version in the ExecStart line.

The service definition should look similar to the following:

[Unit]
Description="Prescan Deployment Service"
Wants=network-online.target
After=network.target network-online.target

[Service]
[email protected]
User=adeye
ExecStart=/usr/local/Prescan_2020.3.0/bin/prescanrun DeploymentService

[Install]
WantedBy=multi-user.target
  • Reload changes and test that the service works
sudo systemctl daemon-reload
sudo systemctl restart deploymentservice.service
sudo systemctl status deploymentservice.service      # just to see if the service is working properly
  • Enable the service on startup
sudo systemctl enable deploymentservice.service

3. Modify build_and_run_experiment.sh:

Download this folder: https://gits-15.sys.kth.se/AD-EYE/prescan_code_generation

Depending on the simulation conventions, it might be necessary to change two constants in the build_and_run_experiment.sh script. For example:

  • For a default PreScan experiment:
PRESCAN_EXPERIMENT_FOLDER="Experiment"
EXPERIMENT_NAME="Experiment"
  • For an AD-EYE experiment like W01_Base_Map_TCP:
PRESCAN_EXPERIMENT_FOLDER="W01_Base_Map_TCP"
EXPERIMENT_NAME="W01_Base_Map_TCP"

Then correct the PRESCANRUN_PATH line below with your Prescan version (2020.1, 2020.3 or 2020.4).

Below the line original_path=(pwd), add the line corresponding to your Prescan version (see above).

Now you can save the file and close it.

Execution

In 4 separate terminals:

  1. Run roscore
roscore
  2. Run the myServer node (from the packageImageRviz package):
rosrun packageImageRviz myServer
  3. Run the PreScan generated code (which includes the rosbridge TCP client):

Go to the folder where you downloaded the repository, then to prescan_code_generation-master/scripts:

./build_and_run_experiment.sh ~/Downloads/W01_Base_Map_TCP_cs.zip
  4. Run Rviz:
rviz

Add an image, and select the topic /topic_image

The simulation will start running.


PreScan-Code-Generator.md

Visual Studio required components

.NET

  • .NET Framework 4.6.1 SDK
  • .NET Framework 4.6.1 targeting pack

Code tools

  • Static analysis tools (optional, outdated?)
  • Text Template Transformations

Compilers, build tools, and runtimes

  • C# and Visual Basic Roslyn compilers
  • C++ Universal Windows Platform tools for ARM64
  • C++/CLI support
  • MSBuild
  • VC++ 2017 version 15.4 v14.11 toolset
  • VC++ 2017 version 15.9 v14.16 latest v141 tools
  • Visual C++ 2017 Redistributable Update
  • Visual C++ compilers and libraries for ARM64

Debugging and testing

  • C++ profiling tools

Development activities

  • C++ for Linux Development
  • Visual C++ tools for CMake and Linux (optional, outdated?)
  • C++ core features

Games and Graphics

  • Graphics debugger and GPU profiler for DirectX

SDKs, libraries, and frameworks

  • Graphics Tools Windows 8.1 SDK (optional, outdated?)
  • Windows 10 SDK (10.0.17763.0)

Back to the overview: Installation

PreScan-Matlab-setup.md

run('C:/Program Files/PreScan/PreScan_8.6.0/prescan_startup.p')
setenv('PRESCAN','C:\Program Files\Prescan\Prescan_8.6.0');
setenv('PRESCAN_BUILD','C:\Program Files\Prescan\Prescan_8.6.0');
setenv('PRESCAN_DATA','C:\Program Files\Prescan\Prescan_8.6.0');
setenv('PYTHONPATH','C:\Program Files\Prescan\Prescan_8.6.0\bin\python27.zip;C:\Program Files\Prescan\Prescan_8.6.0\bin\python27.zip');
setenv('PATH',[getenv('PATH') ';C:\Program Files\Prescan\Prescan_8.6.0\bin']);


Raw-documents.md
File
General_diagram.pdf
General_Diagram.vsdx
Vector_map_visio.vsdx
Lane_change-Code_diagram.vsdx
Trafic_light_recognition-Code_diagram.vsdx
Behavior_selector_flowchart.pdf
Behavior_Selector_Flowchart.vsdx
Cost_Calculation_Flowchart.vsdx
Git_diagram.vsdx
Camera_matrix.xlsx
EntryLane.pptx
List_of_monitoring_topics.xlsx
List_of_parameters_from_matlab.xlsx
Params.xlsx
Pex2csv_diagram.pptx
Quick_Start_Files.xlsx
Traffic_lights_in_vector_maps.pptx

RCV-state-machine.md

The RCV state machine is entirely contained in manager.py, in the class ManagerStateMachine, where the states are defined in an enum.

Feature lists containing the keyword ALLOWED in manager.py define which features can be started in that specific state (those features can be started from a topic, the GUI or the default feature lists). Feature lists containing the keyword DEFAULT define which features are activated automatically for each state, as shown in the following code excerpt.

The following code shows those two feature lists for the engaged state.

    ENGAGED_DEFAULT_FEATURES = [
        # "Recording",
        "Map",
        "Sensing",
        # "Localization",
        "Fake_Localization",
        "Detection",
        "Mission_Planning",
        "Motion_Planning",
        "Switch",
        "SSMP",
        "Rviz",
        # "Experiment_specific_recording"
    ]
    ENGAGED_ALLOWED_FEATURES = [
        "Recording",
        "Map",
        "Sensing",
        "Localization",
        "Fake_Localization",
        "Detection",
        "Mission_Planning",
        "Motion_Planning",
        "Switch",
        "SSMP",
        "Rviz",
        "Experiment_specific_recording"
    ]
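As an illustration only (the names below are simplified stand-ins; the real lists are the ones in manager.py shown above), the gating described here can be sketched as:

```python
# Hypothetical sketch: a feature may only be started in a state if it is in
# that state's ALLOWED list; the DEFAULT list is started automatically.
ENGAGED_DEFAULT_FEATURES = ["Map", "Sensing", "Fake_Localization"]
ENGAGED_ALLOWED_FEATURES = ENGAGED_DEFAULT_FEATURES + ["Recording", "Localization", "Rviz"]

def can_start(feature, allowed_features):
    # A start request (from a topic, the GUI or a default list) is only
    # honoured if the feature is allowed in the current state.
    return feature in allowed_features

def enter_state(default_features, start_feature):
    # On entering a state, every default feature is started automatically.
    for feature in default_features:
        start_feature(feature)

started = []
enter_state(ENGAGED_DEFAULT_FEATURES, started.append)
```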

The change of state can be done by publishing a Boolean on a certain topic according to the following diagram.

Rosbags

Rosbags are saved in the path defined in the variable ROSBAG_PATH (note that this path is relative to the home directory, as ~ is later added as a prefix).

Bash is used to start and stop rosbags recordings.


Redoing-Git-History.md

Sometimes one needs to redo the history of a branch without rewriting the history already on GitHub (i.e., without changing commit messages already on the server). Note that you should never amend commits already pushed to GitHub, as the purpose of the repository is to keep the history intact.

Below you can find two examples where this situation arises:

  • Changing between AD-EYE computers requires updating the git credentials, in particular the name and email of the developer. When this does not happen, the branch ends up having git commits with the wrong developer name.
  • When commit messages do not follow the right conventions (such as lack of descriptiveness).

Therefore, one way to solve this issue is

  • Go back in history to a previous commit where the issue started using the following command:

git reset --soft a5bb4a58f213ec4d2783f83e1c321b5387c1845c

where a5bb4a58f213ec4d2783f83e1c321b5387c1845c is the commit ID (which can be found with git log).

  • Create a new branch from that point in history:

git checkout -b <new_branch>

  • Remove the files from the git index (but not from the computer!):

git rm --cached File

  • Start committing desired files and writing new (desired) history:

git add <just the needed files>

git commit


Remote-Desktop-on-the-VM.md

The problem to solve

We want incoming packets on port 3389 (the remote desktop protocol default port) to be forwarded to the virtual machine.

Suggested approach

The suggested approach is to load the following iptables rules every minute using a service.

For this approach three files are needed:

The list of iptables rules
# Generated by iptables-save v1.6.0 on Tue Sep 21 13:58:34 2021
*filter
:INPUT ACCEPT [2575:230337]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [2342:151425]
-A INPUT -i virbr0 -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -i virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
-A INPUT -i virbr0 -p udp -m udp --dport 67 -j ACCEPT
-A INPUT -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT
-A FORWARD -d 192.168.122.0/24 -o virbr0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 192.168.122.0/24 -i virbr0 -j ACCEPT
-A FORWARD -i virbr0 -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -d 192.168.122.125/32 -p tcp -m state --state NEW,RELATED,ESTABLISHED -m tcp --dport 3389 -j ACCEPT
-A OUTPUT -o virbr0 -p udp -m udp --dport 68 -j ACCEPT
COMMIT
# Completed on Tue Sep 21 13:58:34 2021
# Generated by iptables-save v1.6.0 on Tue Sep 21 13:58:34 2021
*nat
:PREROUTING ACCEPT [370:37521]
:INPUT ACCEPT [110:11610]
:OUTPUT ACCEPT [31:2046]
:POSTROUTING ACCEPT [0:0]
-A PREROUTING -p tcp -m tcp --dport 3389 -j DNAT --to-destination 192.168.122.125:3389
-A POSTROUTING -s 192.168.122.0/24 -d 224.0.0.0/24 -j RETURN
-A POSTROUTING -s 192.168.122.0/24 -d 255.255.255.255/32 -j RETURN
-A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p tcp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p udp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -j MASQUERADE
-A POSTROUTING -j MASQUERADE
COMMIT
# Completed on Tue Sep 21 13:58:34 2021
# Generated by iptables-save v1.6.0 on Tue Sep 21 13:58:34 2021
*mangle
:PREROUTING ACCEPT [3397:690868]
:INPUT ACCEPT [2578:230548]
:FORWARD ACCEPT [562:438543]
:OUTPUT ACCEPT [2342:151425]
:POSTROUTING ACCEPT [2906:590215]
-A POSTROUTING -o virbr0 -p udp -m udp --dport 68 -j CHECKSUM --checksum-fill
COMMIT
# Completed on Tue Sep 21 13:58:34 2021
The service definition that will load those rules
[Unit]
Description = Load ip-table rules

[Service]
Type=oneshot
Restart=no
RemainAfterExit=no
User=root
ExecStart= /bin/sh -c 'iptables-restore < /etc/iptables/rules.v4'


[Install]
WantedBy=multi-user.target
The timer that will trigger the service every minute
[Unit]
Description=load_ip_tables_timer

[Timer]
OnUnitActiveSec=1min
Persistent=true

[Install]
WantedBy=timers.target

The file defining the rules and services can be downloaded here: https://kth.app.box.com/folder/146219549500.

The .service and .timer files need to be placed in /etc/systemd/system/ and the timer service needs to be enabled with

sudo systemctl enable load_iptables_rules.timer
sudo systemctl enable load_iptables_rules.service

This command makes the system start the timer and the service on startup. Once the timer triggers, it calls load_iptables_rules.service, which in turn loads the iptables rules.

The iptables rules need to be placed at /etc/iptables/rules.v4. This can be done with the following command:

sudo cp iptables_rules /etc/iptables/rules.v4

Additional details about how we got there

These details might be useful if the suggested approach above was not successful

Enabling remote desktop by port forwarding

To enable remote desktop access to the windows virtual machine the Ubuntu host needs to forward all incoming connections on the Remote Desktop port (3389).

Typing the following commands with the proper IP addresses will enable the port forwarding.

sudo iptables -t nat -A PREROUTING -p tcp -d 130.237.59.134 --dport 3389 -j DNAT --to-destination 192.168.122.125:3389
sudo iptables -I FORWARD -m state -d 192.168.122.1/24 --state NEW,RELATED,ESTABLISHED -j ACCEPT

IP addresses used in this example:

  • 130.237.59.134: the public IP (google "my ip")
  • 192.168.122.125: address of the Windows VM on the NAT (check ipconfig in Windows cmd)
  • 192.168.122.1 the address of Ubuntu on the NAT (check ifconfig and look for virbr0)

Saving the firewall rules

Reusing the rules above together with the services that load them (the suggested approach) is recommended.

The previous commands set up the port forwarding but do not make it persistent. To do so, run sudo apt-get install iptables-persistent. During the installation the user will be prompted to save the rules; choose yes for both IPv4 and IPv6.

The following commands allow you to manually save the iptables rules:

sudo su
iptables-save > /etc/iptables/rules.v4
ip6tables-save > /etc/iptables/rules.v6
exit

However, on startup libvirtd will modify the iptables rules, and those modifications can conflict with the port forwarding (despite iptables-persistent). That is why a service was created to reload the rules every minute, thus bypassing the startup conflict with libvirtd.

Restoring Iptables rules manually

If at some point the remote connection is not working anymore, the rules might have been lost. In that case the problem is solved by restoring them using the following commands:

sudo su
iptables-restore < /etc/iptables/rules.v4
exit

Reset-DPX2.md

Reset DPX2 to factory settings

To the best of our knowledge, NVIDIA is no longer updating the software support for the DPX2. Nevertheless, the methods on this page work to this day, and additional support from the online community (and NVIDIA moderators) can be found in the NVIDIA developer forum for the Drive PX2.

Driver and CUDA

The NVIDIA SDK Manager is provided to flash the board. The needed GPU driver, CUDA and cuDNN are included. The highest version of the SDK Manager to this date is 1.6.1.8175, with CUDA 9.2. To obtain GPU information, run the CUDA sample in /usr/local/cuda/samples/1_Utilities/deviceQuery.

Install DRIVE with SDK Manager provides the step-by-step installation guide for using the SDK Manager. We provide below our own list of steps.

(Detailed) List of steps

Download and install the SDK Manager on the host computer (also called the development workstation) from here. Note that you must be logged in with your NVIDIA account. Pay attention to the host computer requirements (such as Ubuntu version and hardware).

Connect the DPX2 to the host via a USB A-A cable; the cable must be plugged into the Debug port. Also make sure both devices have a working internet connection.

  1. To run the SDK Manager on the host, execute the following command:

sdkmanager --archivedversions

and the SDK Manager opens.

  2. Change the Target Hardware to "Drive PX2 AutoChauffeur".

  3. Click "CONTINUE TO STEP 02", agree to the terms and conditions and click "CONTINUE TO STEP 03".

  4. The SDK Manager may fail to download the "DevTools Documentation". However, if the remaining components are successfully downloaded, this should not be a problem.

  5. A window will then pop up asking for the following details of the DPX2:

  6. Plug the DPX2 into a display monitor; you need to set up the time, user and other important information.

Congratulations. Your DPX2 has been reset to factory settings!



Retrain-a-CNN-with-ssdCaffe.md

Source: https://github.com/Coldmooon/SSD-on-Custom-Dataset

https://web.archive.org/web/20190711074948/https://github.com/Coldmooon/SSD-on-Custom-Dataset

Bug Report:

  • When executing the create_list.sh script, if files are not found, it may be because of the carriage return and line feed characters present in documents created on Windows. To fix this issue, add two "\r" symbols in the script as follows:
    img_file=$bash_dir/$dataset"_img.txt"
    cp $dataset_file $img_file
    sed -i "s/^/$name\/JPEGImages\//g" $img_file

    label_file=$bash_dir/$dataset"_label.txt"
    cp $dataset_file $label_file
    sed -i "s/^/$name\/Annotations\//g" $label_file
    sed -i "s/\r\r$/.xml/g" $label_file

Source: http://www.yanglajiao.com/article/10km/70144925

WARNING: I had to adapt the fix provided by this source by adding a second "\r"

http://web.archive.org/web/20190712080148/http://www.yanglajiao.com/article/10km/70144925
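A minimal illustration of why the extra "\r" matters (the file name below is made up):

```python
# Files written on Windows end lines with "\r\n". A naive strip of "\n"
# leaves a carriage return glued to the filename, so the file lookup fails.
windows_line = "000001\r\n"          # one line of the Windows-made file list
bad = windows_line.rstrip("\n")      # still ends with "\r" -- no such file
good = windows_line.rstrip("\r\n")   # clean name that matches on disk
```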

  • ImportError: No module named caffe.proto

Fix: edit ~/.bashrc and add or modify the python path to add caffe as a Library:

export PYTHONPATH=/home/adeye/ssdcaffe/python:$PYTHONPATH

Source: https://github.com/weiliu89/caffe/issues/536 Source: https://github.com/BVLC/caffe/issues/263 (may be not as useful but check it just in case if the first one doesn't fix the bug)

http://web.archive.org/web/20190712084145/https://github.com/weiliu89/caffe/issues/536

  • No module named google.protobuf.internal

Fix: pip install protobuf

Source: https://stackoverflow.com/questions/37666241/importing-caffe-results-in-importerror-no-module-named-google-protobuf-interna

http://web.archive.org/save/https://stackoverflow.com/questions/37666241/importing-caffe-results-in-importerror-no-module-named-google-protobuf-interna


ROS-message-sizes-in-Simulink.md

The issue

Simulink (more precisely the Robotics System Toolbox) needs to know the size of the messages it sends to ROS.

The simple fix

The simple fix for setting the message size is through Tools > Robot Operating System > Manage Array Sizes.

This allows setting the size of a given attribute of a specific message type.

The issue is that the size might not always be the same for different occurrences of the same message type. An example is the frame_id field: "/base_link" and "/camera1" do not have the same length.

The elaborate fix

The solution allowing different sizes for the same message field can be seen in the following picture (note that only the frameId field was kept to make the illustration clearer).


Run-a-simulation.md

Linux computer

Start the manager node:

roslaunch adeye manager_simulation.launch

Simulation computer

After finishing the scenario in PreScan, the model has to be built:

Then, Simulink has to be launched from the PreScan Gui:

Once Simulink finishes loading, the path needs to be the folder containing the experiment:

In that folder, a Simulink file with the same name as the experiment can be found:

After opening the Simulink model, the model has to be rebuilt to reflect the changes made:

Before starting the simulation, the connection with the ROS master must be established using the following command in Matlab:

rosinit('Name_of_the_host_computer')

You can use the name of the computer like so:

rosinit('adeye07u')

Stopping the simulation

To stop the simulation, first click the stop button and wait for the Simulink state (indicated in the bottom left corner) to return to ready. Then press ctrl-c in the terminal on the Ubuntu side. If done the other way around, the Simulink terminate phase takes longer.

Possible error 1: Message length in Simulink

Some of the messages sent in Simulink exceed the maximum array size and have to be manually modified the first time the simulation is executed on a computer, or when the length of a message changes (for example, after changing the resolution of the lidar).

The easiest way to find the messages that have to be modified is to execute the simulation and look at the error messages. In the following image, it is possible to see that the "Data" variable inside the "sensor_msgs/Image" message has a wrong length.

The message length can be modified in Tools > Robot Operating System > Manage Array Sizes.

To modify the parameters, untick "use default limits for this message type":

The parameters that need to be changed are:

Message type Array property Maximum length
sensor_msgs/Image Data 2073600
sensor_msgs/Image Encoding 4
std_msgs/Float32MultiArray Data 57600
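The Data length in the table matches a 1920x1080 frame at one byte per pixel; the small helper below (ours, not part of AD-EYE) shows the arithmetic, assuming the Data array holds one entry per byte of image data:

```python
def image_data_length(width, height, bytes_per_pixel):
    # Length of the Data array of a sensor_msgs/Image: one entry per byte.
    return width * height * bytes_per_pixel

# 2073600 corresponds to a 1920x1080 frame at 1 byte per pixel (e.g. mono8);
# a 3-channel encoding such as rgb8 would need three times as much.
length = image_data_length(1920, 1080, 1)
```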

Possible error 2: Error in the intensity block from the point cloud

Sometimes, when the configuration of the point cloud sensor is modified, the block that outputs the intensity of the beams is generated in Simulink even if the intensity output was not selected in PreScan. In that case, the simulation will give an error. To solve this problem, select the intensity output again in PreScan.

Possible error 3: Simulink stays stuck on initializing

Simulink often takes time to initialize the experiment after the run button is pressed. However, it sometimes stays completely stuck on that step (consider it stuck after waiting roughly 3 minutes). If that happens, go back to Prescan and rebuild the Prescan experiment. Then reopen the Simulink file and regenerate the compilation sheet.

If the problem is still not fixed, something is wrong with the experiment folder; it needs to be reset to the latest working version from git.


Run-a-test-automation-experiment.md

The test automation related files are contained in AD-EYE_Core/AD-EYE/TA/.

Running a test automation scenario

SSH needs to be enabled on the Ubuntu computer. This can be done using the command sudo service ssh start (if this command does not work, an SSH package must be installed: sudo apt-get install openssh-server). The Windows computer needs to have a file containing the parameters for the SSH connection; see AD-EYE_Core/AD-EYE/TA/Configurations/SSHConfigTemplate.csv for an example.

The test automation can then be run using the function TA defined in AD-EYE_Core/AD-EYE/TA/TA.m. This function requires a TAOrder file that describes what should be run (an example can be found in AD-EYE_Core/AD-EYE/TA/Configurations). It can also take two optional arguments giving the indices of the first and last experiments from the TAOrder file that should be run.

If the test automation is interrupted, the ROS processes might still be running on the Ubuntu computer. The following commands can be run in an Ubuntu terminal:

 rosnode kill -a
 killall -9 rosmaster

General process overview

Windows side

The concept of the test automation is to run multiple simulations in an automated way. Prescan allows test automation with or without rebuilding the Prescan experiment.

The test automation with rebuild offers more flexibility in the parameters that can be tuned and is therefore the one used. It allows modifying the parameters related to Prescan (such as sensor parameters, for example). Through Matlab code, the Prescan experiment is duplicated into the Results folder of the template experiment. It is rebuilt, its Simulink compilation sheet is regenerated and its Simulink model is modified according to the parameters. This process is done for each test automation configuration (column in the TAOrder file).

Simulink constants can be modified using Matlab code before running the Simulink experiment.

Linux side

To modify the parameters on the ROS side a set of launch files templates is used. In those templates each parameter points to a path on the ROS parameter server. From the Matlab side (TA.m) the wanted values are loaded on the ROS parameter server on the corresponding paths and, then, the template files are modified to replace the path by the value pointed by that path (through AD-EYE_Core/AD-EYE/ROS_Packages/src/AD-EYE/sh/launchTemplateModifier.sh).
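The substitution step can be sketched as follows (a simplified stand-in for what launchTemplateModifier.sh does; the parameter path and value below are invented for illustration):

```python
def fill_launch_template(text, parameter_server):
    # Replace every parameter-server path found in the template by the
    # value stored at that path.
    for path, value in parameter_server.items():
        text = text.replace(path, str(value))
    return text

# Invented template line and parameter value, for illustration only.
template = '<param name="speed_limit" value="/test_automation/speed_limit"/>'
filled = fill_launch_template(template, {"/test_automation/speed_limit": 13.9})
```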

The templates are in AD-EYE_Core/AD-EYE/ROS_Packages/src/AD-EYE/template_launch_files while the modified templates are in AD-EYE_Core/AD-EYE/ROS_Packages/src/AD-EYE/modified_launch_files.

The ROS side is launched using AD-EYE_Core/AD-EYE/ROS_Packages/src/AD-EYE/sh/managerFileLaunch.sh which starts the usual manager while setting the parameter /test_automation to true.

Test automation parameters values

The values used for the test automation are defined in csv files or Excel sheets. Those files are all located in the AD-EYE_Core/AD-EYE/TA/ folder.

File name Description
TAOrder Describes which map and which configuration files to load
AutowareConfig Describes the parameters that will be placed in the ROS launch files
SimulinkConfig Simulink parameters (contains the goal)
SSHConfig Parameters to connect to the Ubuntu computer through SSH
TagsConfig Text file containing the tags for the command line call to PreScan

Editing a test automation scenario

Changing one of the previous files allows modifying the test automation parameters. TAOrder contains a general description of what the test automation will do, while AutowareConfig and SimulinkConfig contain the values of each parameter.

To add test automation to a PreScan world, or to add new variables, test automation must first be activated by clicking on Experiment, General Settings and Test Automation. This menu allows enabling test automation on different actors and on their sensors.

Once test automation has been activated, a label must be added. Right click on an actor and click on Test Automation.

Monte Carlo Sampling

A wrapper was implemented to run TA with only a uniformly sampled subset. The Matlab function MonteCarloTA takes as input the TAOrder file that contains the full parameter set and samples it to generate the sampledTAOrder subset. The table containing the samples is then written as TAOrder_Monte_Carlo.xlsx, and TA is called with that file.
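Conceptually, the sampling looks like this (MonteCarloTA itself is Matlab; the Python below is an illustrative stand-in with invented names):

```python
import random

def sample_ta_order(rows, n_samples, seed=0):
    # Uniformly sample a subset of TAOrder rows without replacement.
    rng = random.Random(seed)
    return rng.sample(rows, n_samples)

# Invented configuration names standing in for TAOrder columns.
full_order = [f"config_{i}" for i in range(100)]
subset = sample_ta_order(full_order, 10)
```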


Running-an-OpenScenario.md
  • In AD-EYE/OpenScenario, open TA_OpenSCENARIO_interface.m.
  • The OpenScenario file should be placed in AD-EYE/OpenSCENARIO/OpenSCENARIO_experiments
  • The parameters listed in the following table should be modified so that the experiments run as intended.
Variable name Description
EgoNameArray Contains the name of the ego vehicle in the PreScan experiment (usually BMW_X5_SUV_1)
ScenarioExpNameArray Contains the name of the OpenScenario that should be run (without .xosc or path)
PrescanExpNameArray Contains the name of the Prescan experiment that will be used as a template
AutowareConfigArray Contains the name of the ROS parameters configuration template file
SimulinkConfigArray Contains the name of the Simulink parameters configuration template file
TagsConfigArray Contains tags that will be added to the PreScan commands to change the weather condition, for example
SSHConfig Contains the name of the file that defines the SSH parameters needed for TA (path relative to AD-EYE/TA)
  • Running TA_OpenSCENARIO_interface.m will generate the OpenScenario .xosc files (\AD-EYE\OpenSCENARIO\OpenSCENARIO_experiments) for all the variations and the related PreScan experiments. It will configure Test Automation and then call it to run what was specified in the original .xosc file.

Details

TA_OpenSCENARIO_interface.m will create multiple Prescan experiments in the Results folder of the OpenScenario version of the template PreScan experiment (for W01_Base_Map the path would be AD-EYE/Experiments/W01_Base_Map/OpenScenario/Results). Then test automation will be called on all the OpenScenario variants and will generate runs in their Results folders (continuing the previous example with the OpenScenario variant variant_1, the path would be AD-EYE/Experiments/W01_Base_Map/OpenScenario/Results/Variant_1/OpenScenario/Results).

Running only the OpenScenario part

  • In AD-EYE/OpenSCENARIO/Code, run OpenScenarioMod.m and API_main.m.

The number of experiments created corresponds to the size of the array in the main .xosc file.

Running only the TA part

  • In AD-EYE/TA run TACombinations.m and TA.m. The lengths of EgoNameArray, PrescanExpNameArray and FolderExpNameArray should be equal. TACombinations.m will create a TAOrder.csv with all the combinations of the experiments given as input. Examples of how those functions can be used can be found in the TA_OpenSCENARIO_interface.m file.

Running-the-GUI.md

Launch the rosbridge server:
In a new terminal

  roslaunch adeye_gui websocket.launch   

rosbridge_websocket node will start on the terminal:

[INFO] [15611647574.1365610]: Rosbridge WebSocket server started on port 9090 

Launch the web video server:
In a new terminal

  rosrun web_video_server web_video_server

web video server node will start on the terminal:

[ INFO] [1597915696.053120868]: Waiting For connections on 0.0.0.0:8080

Launch the GUI(webpage):
In order to launch the webpage, go to the directory where the HTML page is and open it using a web browser. Once the simulation starts running, refresh the page to see the output.
HTML file:

 ~/AD-EYE_Core/AD-EYE/ROS_Packages/src/GUI_server/adeye_gui/gui/gui.html  

About the webpage: the gui folder consists of three major files: gui.html, gui.css and gui.js.

gui.js Handles the connection with ROS, and subscribes to and publishes message topics.

Connection with ros:

var ros = new ROSLIB.Ros({
                      url : 'ws://localhost:9090'
                    });

                    ros.on('connection', function() {
                      document.getElementById("status").innerHTML = "Connected";
                    });

                    ros.on('error', function(error) {
                      document.getElementById("status").innerHTML = "Error";
                    });

                    ros.on('close', function() {
                      document.getElementById("status").innerHTML = "Closed";
                    });

Subscribe to a topic:

var vel_listener = new ROSLIB.Topic({
                      ros : ros,
                      name : '/vehicle_cmd',
                      messageType : 'autoware_msgs/VehicleCmd'
                    });
                    //subscribing to the topic
                    vel_listener.subscribe(function(message) {
                      --------------------
                      ---------------------
                      ---------------------
                     });

Publish a topic:

var faultToggleOn = new ROSLIB.Topic({
                         ros : ros,
                         name : '/fault',
                         messageType : 'std_msgs/Int32'
                       });

                      var faultOn = new ROSLIB.Message({
                         data : 1
                       });
                    faultToggleOn.publish(faultOn);

gui.html Displays cards with the various topics.

 <!-- card for tracked objects -->
                <div class="col-md-4 col-sm-12">
					<div class="card">
                         <div class="bcontainer">
                              <div draggable="true" class="box">
                					<h2 class="text-center">Tracked Objects</h2>
                                    <p>Tracked object: <span id="track"></span></p>
                                    <p>X: <span id="x"></span></p>
                                    <p>Y: <span id="y"></span></p>
                                    <p>Z: <span id="z"></span></p>
						    </div>
					    </div>
				    </div>
                 </div>

gui.css Styling the buttons and webpage. Various classes with respective properties are defined for buttons, gauge, box, etc.

.ratingBox{
    margin: 0 auto;
    width: 24px;
    height: 24px;
    background-color: #F00;
    border-radius: 50%;
    box-shadow: rgba(0, 0, 0, 0.2) 0 -1px 7px 1px, inset #006 0 -1px 9px, #3F8CFF 0 2px 14px;}

Running-the-safety-channel-on-another-computer.md

Two steps need to be done in order to run the safety channel (or any part of AD-EYE) on another computer.

Setting the rosmaster IP address

export ROS_MASTER_URI=http://ROS_MASTER_IP:11311/ If the computer will regularly be used as a slave, it is easier to add that line to the .bashrc file.
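As a sketch, the slave machine's environment could be set as follows. The IP addresses are placeholders; substitute the real master and slave addresses. ROS_IP (the address other ROS nodes use to reach this machine) is not mentioned above but is commonly set alongside ROS_MASTER_URI:

```shell
# Placeholder addresses -- replace with the real master / slave IPs.
export ROS_MASTER_URI=http://192.168.1.5:11311/
# ROS_IP tells other nodes how to reach this machine.
export ROS_IP=192.168.1.10
echo "$ROS_MASTER_URI"
```

Running `echo $ROS_MASTER_URI` afterwards is a quick way to confirm the variable is set in the current shell.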

Adding the address and hostname to the hosts list

This step prevents the error Couldn't find an AF_INET address for [hostname]. The address of every Ubuntu computer in the ROS network must be entered in /etc/hosts with its associated hostname.
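For illustration, the kind of line to add looks like this. The IP and hostname are examples, and the sketch writes to a local demo file; in practice you append the line to /etc/hosts itself with sudo:

```shell
# Demo file standing in for /etc/hosts; edit the real file with sudo in practice.
printf '127.0.0.1 localhost\n' > hosts_demo
# One line per Ubuntu computer in the ROS network: <IPv4 address> <hostname>
echo '130.237.57.121 ubuntu-laptop' >> hosts_demo
grep 'ubuntu-laptop' hosts_demo
```

After editing the real /etc/hosts, `getent hosts <hostname>` can be used to confirm the name resolves.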


Setup-host-names.md

On Ubuntu:

Edit, with administrative rights, the file /etc/hosts. Write the IP address and the name of the Windows computer.

To find the IP address, go to the Windows computer, run ipconfig in the Windows command prompt (search for cmd). The correct address is the IPv4 Address.

[PICTURE IS MISSING HERE]

On Windows:

Open the file C:\Windows\System32\drivers\etc\hosts, write the IP address and the name of the Ubuntu computer.

[PICTURE IS MISSING HERE]

To find the IP address, run ifconfig in an Ubuntu terminal. The correct one is the virbr0.

Note that virbr0 applies to PCs running virtual machines, i.e. when the Ubuntu host is linked to the virtual machine through a virtual network rather than a physical internet connection.


Setup-Jenkins-server.md

How to install Jenkins

First, make sure OpenJDK with Java 8 or 11 is installed on your computer.

Then in a terminal, type:

wget -q -O - https://pkg.jenkins.io/debian/jenkins.io.key | sudo apt-key add -

sudo sh -c 'echo deb https://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'


sudo apt-get update -y

sudo apt-get install jenkins -y


To run the software: sudo systemctl start jenkins

To make Jenkins start automatically at boot: sudo systemctl enable jenkins


Now, to open Jenkins you can browse to http://localhost:8080


To unlock Jenkins, copy and paste the initial admin password, which can be found with the following command:

sudo cat /var/lib/jenkins/secrets/initialAdminPassword

On the next page you can install suggested plugins and then choose username and password for the new admin account.


How to setup Jenkins tests

First, to link it to a GitHub repository using a GitHub App, just follow this tutorial: https://github.com/jenkinsci/github-branch-source-plugin/blob/master/docs/github-app.adoc

To create a job, click on New item in the left menu and then create a Freestyle project.

Then, in the Build section, choose Execute shell and enter the commands that run your tests.


How to automate tests

In Manage Jenkins -> Configure System, add https://gits-15.sys.kth.se/api/v3 as the API URL under GitHub Servers and as the API endpoint under GitHub Enterprise Servers.


Send a message on Slack at every push

To automate tests after each push, you can follow this video tutorial: https://www.youtube.com/watch?v=Moe2D3Rstc4

Important information: when you create the webhook, use http://localhost:8080/github-webhook/ as the Payload URL, replacing localhost with your IP address.


To send the results of the tests in a Slack channel, you can follow this tutorial: https://medium.com/@gustavo.guss/jenkins-and-slack-integration-fbb031cd7895


Launch a test at every pull request

For that, you should install the GitHub Pull Request Builder plugin. Then you can follow this tutorial: https://devopscube.com/jenkins-build-trigger-github-pull-request/ keeping in mind the specifics below.


Because we use GitHub Enterprise, the API URL to enter in system configuration is https://gits-15.sys.kth.se/api/v3

Then you should add the webhook Payload URL as the Jenkins URL override

And link it with the GitHub App created previously as the credential.


For the webhook, you can't reuse the one created for push automation, so you have to create a new one with the Payload URL format shown in the tutorial.


Setup-the-DNS-on-a-setup.md

Errors caused by the lack of proper DNS on a setup

  • In Matlab, when you run rosinit('name_of_the_linux_computer'), the following error happens: Cannot connect to ROS master at http://lenovo-laptop:11311. Check the specified address or hostname.

  • In the Linux terminal, after connecting Matlab, this message is displayed: Couldn't find an AF_INET address for [DESKTOP-GN0SSH]

Steps to resolve the issue

Find the IPv4 address and name of each computer

Setup on Linux

  • In the file /etc/hosts, make sure the Windows computer name is linked with the windows IPv4 address. You can edit the file with the command sudo gedit /etc/hosts. For example: 130.237.57.121 DESKTOP-GN0SSH

Setup on Windows

  • In the file C:\Windows\System32\drivers\etc\hosts, make sure the Linux computer name is linked with the Linux IPv4 address. For example: 130.237.57.244 linux-computer. You need administrator rights to edit the file.

Setup-the-script-to-modify-git-ids.md

Introduction

To avoid identity theft on git when committing and pushing files, two different types of bash scripts have been created:

  • The first one clears the username and email in every AD-EYE repository. This script is meant to be launched automatically at every login.
  • The second one sets your name and email. It has to be launched manually.

For each type of script, there is a version for Linux and a version for Windows, so there are 4 scripts in total.

Where can I find those scripts?

The scripts can be found here: https://gits-15.sys.kth.se/AD-EYE/AD-EYE_Core/tree/feature/git_helper_scripts/Helper_Scripts/Git_user_setup_scripts

How to setup the script

Read the instruction written in https://gits-15.sys.kth.se/AD-EYE/AD-EYE_Core/blob/feature/git_helper_scripts/Helper_Scripts/Git_user_setup_scripts/README.md


_Sidebar.md

Home

Info

Execution

Modifying the map

Descriptions


Simulink-Merge-Conflicts.md

Resolving conflicts with Simulink three-way merge

Three-way merge is used to understand and resolve the differences between two conflicting Simulink design changes.

Creating a conflict and merging

  1. Open MATLAB. The process begins by right-clicking anywhere in the Current Folder section and opening Branches.

  2. Under the Branches section, you can see the current branch name (your branch) at the top left. Then choose a branch (e.g. intern_dev) under Branch Browser. The differences between your branch and the selected branch are shown in the right-side section. Carefully check each difference and then click on the merge button.

  3. Now, because of the conflict, you will receive an error.

  4. If you go back to the Current Folder section, the conflicting files are marked with a red conflict symbol:

  5. Right-click on the file and select View Conflicts to launch the Three-Way Model Merge tool. You will then be able to see:

Resolving the conflicts

  1. In the bottom-left section of the Three-Way Model Merge, for each block and signal, you can select which version you want merged into the target model. Conflicts that cannot be automatically merged can be fixed manually in the target model and individually marked as resolved.
  • At the top, the Theirs, Mine, and Base columns show the differences in the conflicting revision (the selected branch's Simulink model), your revision (your branch's Simulink model), and the base ancestor of both files, respectively.

  • Underneath, the Target panel shows the local file that you will merge changes into. The Merge tool has already automerged the differences it could merge.

  2. In the Target model panel, select a version to keep for each change by clicking the buttons in the Target pane. You can merge modified, added, or deleted nodes, and you can merge individual parameters. The Merge tool selects a choice for every difference it could resolve automatically. Review the selections and change them if you want. Look for warnings in the Conflicts column. Select a button to use Theirs, Base, or Mine for each conflicted item (see the colour symbol for each button section).

  3. Some differences you must merge manually. In the Target panel, look for the manual merge icon in the Conflicts column that shows you must take action. Then select the check option to mark the node as complete.

  4. Examine the summary table (right beside the Target panel) to see the number of automatic merges and the remaining conflicts you need to resolve.

  5. Once you are satisfied with the target model, click Accept and Close. The conflicts are now resolved for that Simulink model. Repeat this process for each Simulink model that shows the red conflict symbol.


Software.md

Software versions

The latest combination of versions that have been tested together.

| Software | Version |
| --- | --- |
| Ubuntu | 16.04 |
| ROS | Kinetic: Install ROS Kinetic |
| OpenCV | 2.4.10 or higher |
| CUDA | Adapted to the GPU and below 10.1 |
| QT | Qt 5.2.1 or higher |
| Autoware | Current version on included sub-repo |
| Pex data extraction | Current version on included sub-repo |
| Matlab | 2019a |
| PreScan | 2020.1 |
| SketchUp | 2019 |

Credentials

Cities skylines: steam account

Username: naveenm123
Password:  Mechatronics1!

Startup_tasks.md

Windows

Basic

Install PreScan
Open PreScan
Choose the menu option File -> Open experiment and navigate to the base world simulation pex file
Click Build experiment
Choose the menu option File -> Invoke Simulink in run mode

Ubuntu

Automated method

#TODO: checkout master instead of TransitionToOrg branch after merge into master
Has not been tested extensively, but the script "Install_AD-EYE.run" in the Helper_Scripts folder should automate the steps shown below.

Meet Dependencies

Meeting dependencies

Clone to home directory

cd $HOME
git clone URL

Update submodules to right version

cd $HOME/AD-EYE
git submodule update --init --recursive

Build Autoware packages

cd $HOME/AD-EYE/Autoware_Private_Fork/ros/src
catkin_init_workspace
cd ..
./catkin_make_release

Build AD-EYE packages

cd $HOME/AD-EYE/AD-EYE/ROS_Packages/
catkin_init_workspace
cd ..
rosdep install -y --from-paths src --ignore-src --rosdistro $ROS_DISTRO
catkin_make

Source the built files

cd $HOME
nano .bashrc
Add the following lines to .bashrc
source /opt/ros/kinetic/setup.bash
source $HOME/AD-EYE/AD-EYE/ROS_Packages/devel/setup.bash --extend
source $HOME/AD-EYE/Autoware_Private_Fork/ros/devel/setup.bash --extend
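The three source lines can also be appended non-interactively instead of using nano. This sketch writes them to a demo file; in practice the target is $HOME/.bashrc:

```shell
# Demo target file; in practice append to $HOME/.bashrc.
cat >> bashrc_demo <<'EOF'
source /opt/ros/kinetic/setup.bash
source $HOME/AD-EYE/AD-EYE/ROS_Packages/devel/setup.bash --extend
source $HOME/AD-EYE/Autoware_Private_Fork/ros/devel/setup.bash --extend
EOF
# Count the appended source lines.
grep -c '^source' bashrc_demo
```

The quoted heredoc delimiter ('EOF') prevents $HOME from being expanded at write time, so the lines land in the file verbatim, just as you would type them in nano.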


Store-static-grid-map-layer-in-file.md

This page covers the topic of creating a file containing the static object layer of a grid map. This is useful when the experiment contains a UserLibrary Element representing the buildings and their shapes are needed for the grid map.

Many solutions have been tried. The first one described is the one that worked and has been implemented. All the other attempted solutions are also documented here.

From pointcloud (Mapping experiment)

The solution currently implemented relies on a .csv file containing all the values of the grid map layer, together with some meta-data describing the map (position, size, etc.).

This solution implies that there is ONE file for each resolution.

Preparation

The creation of the file starts with the mapping experiment. Some blocks need to be present in order to save the position of the car each time the lidar saves a .pcd file.

To do so, the frequency at which the .pcd files and the position are saved should be the same.
One way to do this is to synchronize the sample time of the saving block with the frame rate of the lidar.

It might also be possible to do it with the Prescan simulation frequency.

In the end, there should be as many .pcd files in the Pointcloud_Files folder as rows in the scanPositions.pose timetable.

Creating the file

Once you have the .pcd files and the positions stored in the scanPositions.mat file, you can run the Matlab script writeStaticObjectMap.m in the TA folder.
You have to specify the resolution of the map.

Note: The script should be run from the Mapping folder of the experiment.
So, if you run it with F9, add that folder to the path.

Finally, you just have to move the produced file into a folder named staticObjects next to the Simulation folder.

Here is an example of results we had for the KTH map.

The result can be improved by tweaking some parameters, like the shrink factor used when extracting the boundaries of each point cloud, or even by skipping this step entirely (though this greatly increases the computation time).

Loading the file

The file is automatically loaded by the GridMapCreator at the beginning of the simulation.
If it finds a User Library Element in the .pex file, it automatically checks for a .csv file with the corresponding resolution in its name. If it finds one, it loads it.

If no file matching the right resolution is found, an error message is printed, but the simulation continues.
Also, reading the .csv file doesn't replace reading the .pex file. It is just one more step in creating the staticObject layer.

Note: The .csv information is put into the GridMap before any object found in the .pex file.
So any building in the Prescan experiment will also appear in the GridMap.

What does the script do?

The script reads all the point cloud files. For each of them, it only looks at the ground part (points at z = 0) and extracts the external points (those that delimit the boundaries of the point cloud).

Then, for each of these resulting point clouds, it places them on an occupancy map and traces a line between each point and the center of the lidar. Every cell the line passes through is considered free. Everything else is considered occupied.

After that, the values of the occupancy map are extracted and written to a .csv file.
Finally, the script writes some data to help the GridMapCreator build the layer:

  • The limits of the map (extreme X and Y coordinates in meters)
    The points do not necessarily reach the border of the map. This also gives the position of the occupancy grid in the gridMap.
  • The width and the size of the following occupancy matrix
    Some situations can occur where calculating these values from the resolution and the map size leads to an off-by-one error.
  • The resolution of the following occupancy matrix (in meters/cell)

This raytracing method is implemented in Matlab in the buildMap function. However, the result is an occupancy grid with many different values (not binary), so the values need to be filtered afterwards.

Other tested solutions

Other tested solutions that haven't given proper results. These solutions didn't work with the parameters we tested; it is possible that one of them would work and be easy to implement, but we did not find the correct set of parameters.

From 3D model (.ply file)

We tried a ROS node (https://github.com/ethz-asl/mesh_to_grid_map/) that can convert a .ply file into a gridMap layer.

Starting with a .dae file, like in Create a map with real world data, it can be converted to .ply with tools like SketchUp or Blender (we used Blender in this case).

We are not sure it is necessary, but we removed all the visible lines in Blender before exporting.

We also tried to add a plane in Blender to make a ground.
In this case, a triangle mesh modifier needs to be applied on the plane; otherwise the node crashes, since it only supports triangular polygons.

Then, it seems like only cells under a building get values. Elsewhere nothing appears (possibly a special value like NaN).

Problem: working on a 3D model means the grid placement must be considered in BOTH translation and rotation (as it should have been done in Prescan). But rotating a grid map means recalculating cell values, since the GridMap and occupancy map must stay aligned with the global frame.

If we find a way to get the road network and the 3D model at the same time (from the same tool), so they are already aligned with each other, this solution might become an interesting one.

Point cloud process with ros grid_map_pcl

The grid_map library contains a grid_map_pcl utility that can create an occupancy grid from a point cloud. It takes every point and evaluates its height to determine whether the area is safe or not.

Unfortunately, this kind of method needs a very dense point cloud, so it didn't work for us: our point cloud is not dense enough and the result is not consistent.

Point cloud process with Autoware

Autoware provides a ROS node that processes a point cloud to extract an occupancy map. The node is costmap_generator, found in the semantics section of the Computing tab in the runtime_manager.

It also didn't work: we only got the wall footprints, not the building footprints.
The results were the same kind as with the previous solution, but everything except the walls was white.

Ground point cloud projection (Matlab)

One other solution was to take only the first layer of points (on the ground) and assign the white value to each cell that contains a point; everything else is black.

This is very likely the same method as the grid_map_pcl one, so it didn't work for the same reason and gave more or less the same results.
In fact, we only got the white circles and everything else was black.

Ground point cloud to Mesh (Matlab)

We also tried to create a mesh in Matlab, hoping the mesh would stop at the building boundaries.
Unfortunately, it did not at all.

Moreover, the mesh was so heavy that we couldn't move the canvas.


System-Modelling-with-Capella.md

Cloning the Capella Repository

Create a new folder locally in the home directory using Git Bash.

cd ~
mkdir Capella_Folder
cd Capella_Folder

From the link https://gits-15.sys.kth.se/AD-EYE/ADI_Capella, click on the Clipboard to copy the Git url for the repository.

Now that we are in the folder Capella_Folder, we can clone the repository using git clone and the link we previously copied.

git clone https://gits-15.sys.kth.se/AD-EYE/ADI_Capella.git 

This will ask for a Username and Password. Username is your KTH username. Password here refers to your Personal Access Token.

To create your Personal Access Token, visit the link https://docs.github.com/en/github/authenticating-to-github/creating-a-personal-access-token

Once you have entered your username and password, the repository is cloned.

Opening Capella

The software requires Java jre-8u221 in order to be launched. Open the folder and install Java by clicking on the jre-8u221-windows-x64 file.

Once Java is installed, you can open the software located in Capella_Folder -> ADI_Capella -> Capella -> Eclipse -> Eclipse Application

Once the application opens, select the workspace folder that contains ADI and RVCE.


Tools-Setup.md

IDE: Visual Studio Code

The recommended IDE is VSCode, which can be downloaded here.

Extensions

Extensions can be installed following these instructions.

The extensions listed below are strongly recommended to work on AD-EYE.

C++

https://marketplace.visualstudio.com/items?itemName=ms-vscode.cpptools

Python

https://marketplace.visualstudio.com/items?itemName=ms-python.python

ROS

https://marketplace.visualstudio.com/items?itemName=ms-iot.vscode-ros

After installing the ROS extension, go to its settings and set the distribution to kinetic.

C++ formatter: clang-format

clang-format should be installed using the following command:

sudo apt-get install clang-format

The installation can be verified with clang-format --version.

Setting up the git hook to autoformat on commit

Make sure your version of git is higher than 2.10 (git --version). If it is not, update it.

Then use the following command inside the repository: git config core.hooksPath .githooks. After that, every time the command git commit is run, the terminal should display Pre-commit hook: formatting using clang-format.
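As a quick sanity check, the hook path setting can be verified by reading it back. The sketch below uses a throwaway repository purely for demonstration; run the same two git config commands inside the real AD-EYE repository:

```shell
# Throwaway repo just to demonstrate the setting; in practice run the
# config commands inside the AD-EYE repository itself.
git init -q hookdemo && cd hookdemo
git config core.hooksPath .githooks
# Reading the key back confirms the hook path is set.
git config core.hooksPath
```

If the last command prints .githooks, the repository will look for hooks (including the pre-commit formatter) in that folder.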

Setting up VSCode to use clang-format

The setting in VSCode should be set as on the following picture:

The code can then be automatically formatted using ctrl + shift + i.

If VSCode asks which formatter should be used, choose C/C++ as shown in the following picture:


Update-git-on-Ubuntu.md
sudo add-apt-repository ppa:git-core/ppa -y
sudo apt-get update
sudo apt-get install git -y
git --version

Use-python-scripts-on-bag_tools.md

For all the functions explained below, you need to have installed a catkin workspace with bag_tools.

Function merge.py

Open a terminal, go to the location of the script, and enter the command python merge.py inputbagfile.bag --output outputbagfile.bag --topics /tf. You can give several input bag files and topics; the topic /tf is just an example. The output bag file name is optional: if you don't give one, the output bag file will be named output.

Function change_frame_id.py

In one terminal, run ROS with the command roscore. In another terminal, go to the location of your files and enter the command rosrun autoware_bag_tools change_frame_id.py -o outputbagfile.bag -i inputbagfile.bag -t topicname -f frameid


Vector-Mapper.md

The vector mapper is a piece of software that generates a vector map from the pex file created in PreScan. This vector map consists of a few csv files and is used by Autoware for navigation.

Vector Mapper restrictions

To create a simulation world in PreScan, the rules explained in the following link should be respected.

Vector Mapper description


VectorMapper-Rules.md

Some rules need to be followed during the creation of the simulation world in order to ensure that it is supported by the vector mapper.

Rules relating to Road Network

If your network is closed (no road section is left unconnected on one of its sides), you need to have at least one crossroad in your map. If not, for some reason, Autoware doesn't work.

Available RoadType

All road types in Prescan are supported by the VectorMapper.

Roundabout

Two rules need to be followed for roundabouts:

  1. Every exit must be connected to another road (any road type except X crossing, Y crossing, and Roundabout)

  2. The roads must be connected to the roundabout exits so that their origin is on the roundabout side (see below for a visual)

Spiral Road

The rules are almost the same as the ones about roundabout:

  1. The end of every spiral road must be connected to another road (any road type except X crossing, Y crossing, and Roundabout)

  2. The end of the spiral road must be connected to the origin point of the next road (same idea as roundabouts).

Rules relating to Stop Lines and Traffic Lights

Stop Line

In order to add a Stop line in your world, use the Road marking called "BitmapRoadMarker" which is a white STOP road marking.

Place that road marking where you want your stop line to be. If you want to put a stop line on a multi-lane road, you'll have to manually put each road marking for each lane concerned by the stop line.

You don't have/need to put the road marking for the X and Y crossing, because stop lines are created when the road type is created.

Try to put your road marking as close as possible to the middle of your lane.

Traffic Lights

When placing a Traffic Light in your world, always make sure there is a corresponding stop line near it.

For now, Traffic Light support for multiple lanes is not fully coded (see Trello for more information)


Version-Control.md

General setup

We use the https protocol to access the remote repository. For authentication, the username is the KTH mail address and the password is a user-generated personal access token. Be careful when copying the token, as a stray line ending can cause an error. It is strongly recommended to use the copy button next to the token.

Pages regarding Git


Version-Control-Process.md

Branching conventions

Guide based on this link

Contribute/Edit code

  1. git checkout intern_dev switches to intern_dev branch
  2. git pull updates the local intern_dev branch
  3. git checkout -b <new-branch-name> creates a new branch
  4. Work on the branch (git add and git commit) and regularly push the branch upstream
  5. Once the work is finished and tested, create a pull request for the branch to be merged into intern_dev
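The steps above can be sketched end-to-end as follows. The throwaway repository, user identity, and branch name feature/my_task are placeholders for demonstration; in the real workflow you work in the AD-EYE checkout and git pull fetches from the remote:

```shell
# Throwaway repo standing in for the real AD-EYE checkout.
git init -q workflow_demo && cd workflow_demo
git config user.email demo@example.com && git config user.name "Demo"
git commit -q --allow-empty -m "initial commit"
git checkout -q -b intern_dev          # stands in for the shared intern_dev branch
git checkout -q -b feature/my_task     # step 3: create a new working branch
echo "work" > notes.txt
git add notes.txt && git commit -q -m "work on the task"   # step 4
git rev-parse --abbrev-ref HEAD        # confirms the current branch
```

With a real remote, step 2 would be `git pull` on intern_dev and step 4 would include `git push -u origin feature/my_task`.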

Pull requests creation

Before doing the pull request make sure that the feature you have implemented is functional and that it does not interfere with the platform (it does not break existing features). W01_Base_Map should always be functional and all the launch files should point to that map when creating the pull request.

During the pull request creation use the pull request template and fill it in (leave the review part to the reviewer).

After the pull request is created, you will be able to see a diff between both branches. Check all the modified files and make sure that there is no unintended modification.

Pull requests review

All the pull requests should contain the pull request template. This template contains a list of things to be checked by the reviewer during the review process.

Some modifications might be required. In that case, the reviewer can request changes. Once the reviewer has gone through the full review checklist and does not have any changes to request, they can approve the pull request and merge it.

Weekly process

Every week, one (or more) person is responsible for the intern_dev branch. That week, that person must:

  • review all the pull requests raised to intern_dev
  • merge the pull request to intern_dev once approved
  • raise a pull request from intern_dev to dev on Friday and notify Maxime
  • redistribute the requested changes by Maxime to the relevant code owners (if needed use git blame)
  • repeat this process until Maxime approves the pull request and synchronizes intern_dev and dev

The Friday intern_dev-dev pull request is high priority and all people that have to make changes should put their current task on hold.

Branch protection

  • Pushes to the master and dev branch are disabled.
  • All integrations to master are done via pull requests assigned to Naveen.
  • All integrations to dev are done via pull requests assigned to Maxime.

Video-creation.md

Video template:

Music:

https://www.youtube.com/audiolibrary/music?ar=1567183009095&nv=1


Working-on-autoware.md

Repositories

autoware is split into multiple repositories:

  • common
  • core_planning
  • core_perception
  • ...

Most of those repositories should have a dev branch which was made for AD-EYE. If there is no dev branch yet, please create it.

To work on one or several of these, branch out from dev and commit to the created branch. Once the task is done create a pull request to dev.

Compiling autoware

While working on autoware, compilation will be needed to see the result of the modifications. To do so use the following commands:

cd
cd AD-EYE_Core/autoware.ai
AUTOWARE_COMPILE_WITH_CUDA=1 colcon build --cmake-args -DCMAKE_BUILD_TYPE=Release
