# Autonomous Driving Intelligence (ADI)
## Architecture
In its current state, the Autonomous Driving Intelligence (ADI) is composed of two separate control channels:
- The nominal channel
- The safety channel
The ADI runs entirely on ROS as its middleware. The channels define different frames, which are described here.
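As an illustration of how a frame relationship can be declared in ROS, the sketch below publishes a single static transform with `tf2_ros`. The frame names `map` and `base_link` are generic placeholders, not necessarily the frames the ADI channels actually define.

```python
#!/usr/bin/env python
# Minimal sketch: publish one static transform between two frames in ROS1.
# The frame names below are generic examples, not the actual ADI frames.
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

if __name__ == "__main__":
    rospy.init_node("static_frame_broadcaster")
    broadcaster = tf2_ros.StaticTransformBroadcaster()

    t = TransformStamped()
    t.header.stamp = rospy.Time.now()
    t.header.frame_id = "map"          # parent frame (assumed name)
    t.child_frame_id = "base_link"     # child frame (assumed name)
    t.transform.translation.x = 0.0
    t.transform.translation.y = 0.0
    t.transform.translation.z = 0.0
    t.transform.rotation.w = 1.0       # identity rotation

    broadcaster.sendTransform(t)
    rospy.spin()
```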
### The nominal channel
The nominal channel is the channel in control under nominal conditions. It is based on the open-source project Autoware.
The general diagram representing the ROS nodes can be found here.
A deeper description of the behavior selector can be found here.
A deeper description of the detection stack can be found here.
### The safety channel
The safety channel was entirely developed at KTH. Its role is to monitor the nominal channel and to make sure it stays within its operational envelope.
It has a basic perception stack that runs a Euclidean clustering node on the lidar data and adds the hull of each detected cluster to a layer of the gridmap. This gridmap is then flattened and used by the safe stop motion planner (SSMP). The safety supervisor uses fault monitors to watch for different types of faults.
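To make the pipeline concrete, here is a minimal, self-contained Python sketch of the same idea: cluster 2D lidar points, compute the hull of each cluster, and mark the hull area in a grid layer. It is only an approximation of the real nodes: DBSCAN stands in for the Euclidean clustering node, a plain NumPy array stands in for the gridmap, and all sizes and thresholds are assumed values.

```python
# Sketch of the safety-channel perception idea: cluster lidar points, take the
# convex hull of each cluster, and mark the hull area in an occupancy layer.
# DBSCAN approximates Euclidean clustering; a numpy array approximates the gridmap.
import numpy as np
from sklearn.cluster import DBSCAN
from scipy.spatial import ConvexHull
from matplotlib.path import Path

RESOLUTION = 0.5     # grid cell size in metres (assumed value)
GRID_SIZE = 100      # 100 x 100 cells -> a 50 m x 50 m layer

def cluster_points(points_xy, tolerance=0.7, min_points=5):
    """Group 2D lidar points into clusters (DBSCAN as a Euclidean-clustering stand-in)."""
    labels = DBSCAN(eps=tolerance, min_samples=min_points).fit_predict(points_xy)
    return [points_xy[labels == k] for k in set(labels) if k != -1]

def add_hulls_to_layer(clusters, layer):
    """Rasterise the convex hull of each cluster into the occupancy layer."""
    xs = (np.arange(GRID_SIZE) + 0.5) * RESOLUTION
    cell_x, cell_y = np.meshgrid(xs, xs)
    cells = np.column_stack([cell_x.ravel(), cell_y.ravel()])
    for cluster in clusters:
        if len(cluster) < 3:              # a hull needs at least 3 points
            continue
        hull = ConvexHull(cluster)
        polygon = Path(cluster[hull.vertices])
        inside = polygon.contains_points(cells).reshape(GRID_SIZE, GRID_SIZE)
        layer[inside] = 1.0               # mark hull cells as occupied
    return layer

if __name__ == "__main__":
    # Fake lidar returns: two small obstacles in a 50 m x 50 m area.
    rng = np.random.default_rng(0)
    obstacle_a = rng.normal(loc=[10.0, 10.0], scale=0.3, size=(40, 2))
    obstacle_b = rng.normal(loc=[30.0, 25.0], scale=0.3, size=(40, 2))
    points = np.vstack([obstacle_a, obstacle_b])

    layer = np.zeros((GRID_SIZE, GRID_SIZE))
    layer = add_hulls_to_layer(cluster_points(points), layer)
    print("occupied cells:", int(layer.sum()))
```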
## ROS structure and features
Several launch files were made to start the different components; each corresponds to a feature of the ADI.
| Feature name |
|---|
| Recording |
| Map |
| Sensing |
| Localization |
| Fake_Localization |
| Detection |
| Mission_Planning |
| Motion_Planning |
| Switch |
| SSMP |
| Rviz |
| Experiment specific recording |
The manager node is in charge of starting and stopping those features according to a state machine.
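The sketch below shows one way such a manager could start and stop feature launch files from a state machine, using the `roslaunch` Python API. The state names, the state-to-feature mapping, and the launch file paths are illustrative assumptions, not the actual ADI configuration.

```python
#!/usr/bin/env python
# Sketch of a manager node that starts/stops feature launch files according to
# a simple state machine, using the roslaunch Python API. The states, feature
# mapping, and launch file paths are illustrative assumptions.
import rospy
import roslaunch

# Hypothetical mapping from manager states to the launch files active in that state.
STATE_FEATURES = {
    "STARTUP": ["/path/to/Map.launch", "/path/to/Sensing.launch"],
    "DRIVING": ["/path/to/Localization.launch", "/path/to/Detection.launch",
                "/path/to/Motion_Planning.launch", "/path/to/SSMP.launch"],
}

class FeatureManager(object):
    def __init__(self):
        self._uuid = roslaunch.rlutil.get_or_generate_uuid(None, False)
        roslaunch.configure_logging(self._uuid)
        self._running = {}   # launch file path -> ROSLaunchParent

    def start_feature(self, launch_file):
        if launch_file in self._running:
            return
        parent = roslaunch.parent.ROSLaunchParent(self._uuid, [launch_file])
        parent.start()
        self._running[launch_file] = parent

    def stop_feature(self, launch_file):
        parent = self._running.pop(launch_file, None)
        if parent is not None:
            parent.shutdown()

    def apply_state(self, state):
        """Start the features the state needs and stop everything else."""
        wanted = set(STATE_FEATURES.get(state, []))
        for launch_file in list(self._running):
            if launch_file not in wanted:
                self.stop_feature(launch_file)
        for launch_file in wanted:
            self.start_feature(launch_file)

if __name__ == "__main__":
    rospy.init_node("feature_manager_sketch")
    manager = FeatureManager()
    manager.apply_state("STARTUP")   # e.g. transition into the startup state
    rospy.spin()
```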
## Model of the Platform
To see a representation of the system, refer to https://gits-15.sys.kth.se/AD-EYE/ADI_Capella, where the whole platform has been modeled. This model can be compared with what is running in real time using the `rqt_graph` command (see here for more information).