SLAM Benchmarking Tool: Dataset Fuzzing Mechanism
Overview
This SLAM benchmarking tool incorporates a fuzzing mechanism tailored specifically for SLAM algorithms. Fuzzing involves introducing perturbations into datasets dynamically at runtime, which are then fed to the algorithms. This approach does not require additional storage for modified datasets and incurs no overhead on the algorithm under test, as modifications are performed externally.
Note: Currently, the fuzzing functionality is only available on the `image_filter` branch due to a bug described in issue #31.
Features
- Dynamic Perturbations: Introduces changes to datasets at runtime without needing extra storage.
- No Overhead: Modifications are external, ensuring no impact on the algorithm's execution time.
- Configurable: Uses JSON configuration files to specify perturbation characteristics.
- Per-Frame Customization: Perturbations can be customized on a per-frame basis for flexibility.
Installation
- Clone the repository and check out the `image_filter` branch:
  ```bash
  git clone https://github.com/your-repo.git
  cd your-repo
  git checkout image_filter
  ```
- Install dependencies:
  ```bash
  pip install -r requirements.txt
  ```
Changes to Algorithm Implementations
To utilize the fuzzing mechanism with your own SLAM algorithms, you need to implement a counter system that tracks the frames consumed by the algorithm. This is essential for the mechanism to correctly apply perturbations to the appropriate frames.
Implementation Steps
- Frame Counter Initialization:
  - The `frame_counter` is an attribute of the `SLAMBenchLibraryHelper` class and is used to keep track of the current frame being processed.
- Incrementing the Frame Counter:
  - The frame counter must be incremented each time `sb_process_once` is called within the algorithm, or whenever else the algorithm utilises a frame.
  - Add the following line to the `sb_process_once` function to increment the counter:
    ```cpp
    bool sb_process_once(SLAMBenchLibraryHelper *slam_settings) {
        slam_settings->frame_counter++;
        // Existing processing logic
    }
    ```
  - This ensures that the fuzzing mechanism correctly identifies the current frame and applies the appropriate perturbations.
-
Refer to the Test Implementation:
  - You can refer to the `test_library` for an example of how to properly integrate this functionality with your algorithm.
Example
Here’s how your `sb_process_once` function might look:
```cpp
bool sb_process_once(SLAMBenchLibraryHelper *slam_settings) {
    // Increment the frame counter
    slam_settings->frame_counter++;

    // Your SLAM algorithm's processing logic
    // ...

    return true; // or false based on your processing outcome
}
```
This small addition is crucial for the correct operation of the fuzzing mechanism, allowing it to accurately track and perturb the data fed into your SLAM algorithm.
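To see why the counter matters, here is a minimal, purely illustrative sketch of how a loader-side check could use it to decide whether the current camera frame falls inside a configured ID range before the frame is handed to the algorithm. The `PerturbationRange` struct and `maybe_perturb` helper are hypothetical names, not part of SLAMFuse, and treating the configured brightness as a constant added to every pixel (via OpenCV) is an assumption.
```cpp
// Illustrative sketch only, not SLAMFuse's actual implementation.
#include <cstdint>
#include <opencv2/core.hpp>

struct PerturbationRange {     // hypothetical stand-in for one "frames" entry
    uint64_t first_id;         // first frame ID the perturbation applies to
    uint64_t last_id;          // last frame ID the perturbation applies to
    double   brightness;       // value added to every pixel (assumed semantics)
};

// Called by the dataset loader with the algorithm's current frame_counter value.
void maybe_perturb(cv::Mat &camera_frame, uint64_t frame_counter,
                   const PerturbationRange &range) {
    if (frame_counter < range.first_id || frame_counter > range.last_id)
        return;                                   // frame is outside the configured range
    // Keep the original depth (-1), scale by 1.0, add the brightness offset.
    camera_frame.convertTo(camera_frame, -1, 1.0, range.brightness);
}
```
Without the counter increment in `sb_process_once`, the loader has no way of knowing which frame the algorithm is currently consuming, so the range check above would drift out of sync with the configuration.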
Usage
Configuration
Create a JSON configuration file specifying the perturbations. The configuration file should define:
- The frame IDs (or ID ranges) to be modified.
- The sensors whose data should be perturbed for those frames.
- The filters to be applied and their settings (e.g., brightness, contrast, blur).
Example configuration:
```json
{
"frames": [
{
"id_range": [1, 900],
"camera": [
"brightness"
],
"sensor_settings": {
"camera": {
"mean": 100,
"standard_deviation": 10,
"kernel_size": 1,
"brightness": 100,
"contrast": 0
}
}
}
]
}
```
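How the tool interprets each of these fields is defined by its implementation; the sketch below shows one plausible mapping onto standard OpenCV operations, so the settings are easier to reason about. The `CameraSettings` struct and `apply_filters` function are hypothetical, and the formulas (contrast as a gain around 1, `mean`/`standard_deviation` as additive Gaussian noise) are assumptions rather than SLAMFuse's actual semantics.
```cpp
// Hypothetical sketch of how the camera "sensor_settings" fields might map onto
// image operations; this is not SLAMFuse's actual code.
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

struct CameraSettings {          // mirrors the JSON "camera" settings block
    double mean = 0.0;           // mean of additive Gaussian noise (assumed)
    double standard_deviation = 0.0;
    int    kernel_size = 1;      // Gaussian blur kernel size
    double brightness = 0.0;     // offset added to every pixel
    double contrast = 0.0;       // assumed: gain = 1 + contrast / 255
};

void apply_filters(cv::Mat &img, const CameraSettings &s) {
    // Brightness / contrast: img = gain * img + brightness (saturating).
    const double gain = 1.0 + s.contrast / 255.0;   // assumed formula
    img.convertTo(img, -1, gain, s.brightness);

    // Blur: Gaussian blur with an odd kernel size derived from kernel_size.
    if (s.kernel_size > 1) {
        const int k = s.kernel_size | 1;            // force an odd kernel
        cv::GaussianBlur(img, img, cv::Size(k, k), 0);
    }

    // Noise: Gaussian noise added in floating point, then converted back to the
    // original depth with saturation.
    if (s.standard_deviation > 0.0) {
        cv::Mat noise(img.size(), CV_32FC(img.channels()));
        cv::randn(noise, cv::Scalar::all(s.mean), cv::Scalar::all(s.standard_deviation));
        cv::Mat tmp;
        img.convertTo(tmp, CV_32F);
        tmp += noise;
        tmp.convertTo(img, img.type());
    }
}
```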
Running the Fuzzing Mechanism
- Ensure you are on the `image_filter` branch:
  ```bash
  git checkout image_filter
  ```
- Run the benchmarking tool with the fuzzing mechanism:
  ```bash
  python benchmark_tool.py --config path/to/your/config.json
  ```
Experiments
Image Quality and Breakpoints
We conducted experiments to assess the impact of image perturbations on the performance of visual SLAM algorithms. The following filters were applied:
- Brightness: Range from -255 to 255 with increments of 25.
- Contrast: Range from -255 to 255 with increments of 25.
- Blur: Kernel size range from 1 to 10 with increments of 1.
Experiment Setup
- Selection of Algorithms and Datasets:
  - ORB-SLAM3
  - LSD-SLAM
  - ElasticFusion
  - Datasets: TUM Freiburg 1 xyz and TUM Freiburg 1 Desk
- Perturbations:
  - Applied to random 1-second sequences, affecting up to 10% of the trajectory (see the configuration sketch after this list).
  - Experiments repeated 5 times per filter value, with errors averaged.
- Results:
  - Analyzed the impact on each algorithm, observing error patterns and robustness.
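As a concrete but illustrative example, a single 1-second perturbation window such as those used in these experiments could be written in the configuration format shown above. The frame IDs below assume a 30 Hz camera, so one second spans roughly 30 frames; they are not the actual ranges used in the reported runs.
```json
{
  "frames": [
    {
      "id_range": [300, 330],
      "camera": ["brightness"],
      "sensor_settings": {
        "camera": {
          "mean": 0,
          "standard_deviation": 0,
          "kernel_size": 1,
          "brightness": 75,
          "contrast": 0
        }
      }
    }
  ]
}
```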
Results
- Brightness Perturbation:
  - General U-shaped error curves.
  - LSD-SLAM showed faster error escalation.
  - ORB-SLAM3's response varied by dataset.
- Contrast Perturbation:
  - Similar U-shaped error patterns.
  - Differences in robustness across algorithms and datasets.
- Blur Perturbation:
  - Measured the number of ORB features detected and the error in ORB-SLAM3 using the KITTI dataset.
Contributing
- Fork the repository.
- Create a new branch for your feature or bugfix.
- Commit your changes.
- Push to the branch.
- Open a pull request.
License
This project is licensed under the MIT License. See the LICENSE file for details.
Contact
For any questions or issues, please open an issue on GitHub or contact the maintainers.
Note: Remember to address the bug in issue #31 to merge the fuzzing functionality into the main branch.