Software Architecture - mipidisaster/miLibrary GitHub Wiki
This page goes through my suggested architecture for any embedded system that I will be implementing. The intent is to have an architecture which can be used both for dedicated hardware (i.e. miStepper) and for something implemented within the Robot Operating System. The main goal is to keep the architecture modular, such that I should be able to test parts in a MATLAB/SIMULINK environment, or within a standalone ROS environment.
Embedded System Model
To align with the goal of keeping the system modular, the best model to use is one where the topmost layer interfaces with the user, and the bottommost layer interfaces directly with the hardware.
Top Layer -> Application Layer
Middle Layer -> System Layer
Lowest Layer -> Hardware Layer / OS
The bottommost layer will be the most hardware specific, as it needs to be tailored to the device the system is embedded on, as well as the interfaces that exist. The topmost layer, on the other hand, has the highest level of abstraction from this; it works with real-world parameters (speed, distances, etc.) and can therefore be put on any device, so long as similar inputs are provided.
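As a rough illustration of this split (the class and function names below are made up for the example, they are not actual miLibrary code), the hardware layer deals in device-specific register writes, while the application layer only ever sees real-world units:

#include <cstdint>

// Hardware Layer: tied to a specific device/peripheral
class StepperHardware {
 public:
    void writeStepRateRegister(uint16_t raw_value) {
        // ... device-specific SPI/register access would live here ...
        (void) raw_value;
    }
};

// Application Layer: only deals in real-world parameters (speed in RPM)
class SpeedController {
 public:
    explicit SpeedController(StepperHardware& hw) : _hw(hw) {}

    void requestSpeed(float speed_rpm) {
        // Convert the real-world request into the device representation
        uint16_t raw = static_cast<uint16_t>(speed_rpm * kCountsPerRpm);
        _hw.writeStepRateRegister(raw);
    }

 private:
    static constexpr float kCountsPerRpm = 10.0f;   // made-up scaling factor
    StepperHardware&       _hw;
};

Because SpeedController only speaks in RPM, it could be moved to another device (or a simulation) as long as something provides the same hardware-facing call underneath it.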
I want to have some level of fault detection and annunciation, so that I am aware of any issues that the system may experience. To support this, I want a dedicated "sub-layer" within the System layer to cater for common/shared resources in the system. Along with fault detection, it will also manage a level of logging (ROS) as well as external communication. So the model will look more like:
To aid in separating out modules/components/nodes, each will have a layer file (.h/.cpp) indicating the data which is made available to other systems for consumption. Anything which is NOT captured within these interface/boundary files is not available for use by other nodes. Additionally, as I intend to use this within either an RTOS or a scheduler system, I will be implementing "light" semaphores for each piece of data that is shared externally.
Obviously the node which generates the data will be its owner; however, there are likely to be scenarios where I want to reset it to some value from another component. This is where the semaphore will come in handy :D
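A minimal sketch of what one of these boundary files and its "light" semaphore could look like is shown below. All names here are illustrative rather than taken from miLibrary, and the flag is deliberately simplistic (a check-then-set flag, not an atomic primitive); a full RTOS would use its own semaphore API instead.

#include <cstdint>

template<typename T>
struct SharedData {
    T                value {};
    volatile uint8_t lock = 0;   // 0 = free, 1 = claimed ("light" semaphore)

    // The owner (or another node wanting to reset the value) claims the flag,
    // writes, then releases. This signals "someone is mid-update" to a simple
    // scheduler loop; it is NOT a thread-safe atomic operation.
    bool tryWrite(const T& new_value) {
        if (lock != 0) { return false; }     // someone else is updating
        lock  = 1;
        value = new_value;
        lock  = 0;
        return true;
    }

    T read(void) const { return value; }
};

// Example boundary file content: data this node exposes for other nodes to consume.
// Anything not declared like this stays internal to the node.
extern SharedData<float> node_measured_angle;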
So updating the above diagram with this layer/boundary gives:
ROS Architecture
The intended way that this will work within the ROS environment is shown in the diagram below:
The topics from each node need to follow the naming scheme below:
< Embedded Device Source > / < Source - HAL/SYS/APP > / < Topic name >
Where HAL is the "Hardware Arbitration Layer", SYS is the "System Layer", and APP is the "Application Layer". I will need to look into how to set this up, as I don't want this naming to exist within the code; I would like it to be part of the launch file.
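One possible way to keep the naming out of the code (sketched here with roscpp; the node name, topic name, and namespace are made up for the example) is for each node to advertise only the bare topic name, and let the launch file push the node into the <Embedded Device Source>/<Layer> namespace, e.g. via a group namespace:

#include <ros/ros.h>
#include <std_msgs/Float32.h>

int main(int argc, char** argv) {
    ros::init(argc, argv, "angle_sensor");
    ros::NodeHandle nh;     // relative namespace - resolved by the launch file

    // Published as "<ns>/measured_angle", e.g. "miStepper/HAL/measured_angle"
    // if the launch file wraps this node in a "miStepper/HAL" namespace.
    ros::Publisher pub = nh.advertise<std_msgs::Float32>("measured_angle", 10);

    ros::Rate rate(10);
    while (ros::ok()) {
        std_msgs::Float32 msg;
        msg.data = 0.0f;    // placeholder value
        pub.publish(msg);
        rate.sleep();
    }
    return 0;
}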
This way it is possible to figure out where a signal has been sourced from. The intent is that HAL signals can only interface with the System Layer/Shared Resource Layer, and that the Application Layer can only take signals from the System or Shared Resource Layer.
The above example demonstrates the power of this setup: the "Derived Angle" node can be placed anywhere within the system, and doesn't necessarily need to be within the embedded device. The C or Python code can also be tested out-of-system (i.e. in MATLAB/SIMULINK) with simulation signals prior to putting it into the system.
Folder structure
I will need to look into the specifics of this for ROS; however, for embedded devices the intended folder structure is as follows:
<project root>
├─ 0_hardware_drivers
│ ├─ spi_dev_driver
│ │ ├─ spi_dev_driver.h
│ │ ├─ spi_dev_driver.cpp
│ │ ├─ spi_dev_driver_parameters.h
│ │ └─ others...
│ └─ others...
├─ 1_hardware_arbitration_layer
├─ 2_system_layer
├─ 3_shared_resource_layer
├─ 4_middle_interface_layer
├─ 5_application_layer
└─ 6_application_interface_layer
This way, as you go down the folder structure, the level of abstraction increases!