# Benchmark Models
Benchmark tests should be completed for all models.
To allow for cross-language benchmark tests, it is recommended to use Script of Scripts (SoS) Jupyter notebooks. SoS currently supports R, Python and Matlab, among other languages.
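As a rough illustration (not part of the scatmod code base), the final comparison cell of such an SoS notebook could look like the sketch below, run with the Python 3 kernel. The `%get` magic and the `ts_r` / `ts_python` variables are assumptions standing in for output produced by earlier R and Python cells.

```python
# Hypothetical final cell of an SoS benchmark notebook (Python 3 kernel).
# An earlier R-kernel cell is assumed to have computed the vector `ts_r`
# for the same frequency grid; in SoS it would be pulled in with a magic like:
# %get ts_r --from R
import numpy as np

ts_python = np.array([-45.2, -44.8, -44.1])  # placeholder Python-model TS output (dB)
ts_r = np.array([-45.3, -44.8, -44.0])       # placeholder for the values delivered by %get

# Cross-language check: maximum absolute TS difference between implementations.
print("max |TS_python - TS_R| =", np.max(np.abs(ts_python - ts_r)), "dB")
```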
For best visualisation inside GitHub, it is recommended to save each benchmark test as a Python notebook (*.ipynb) in the *Python* folder as well as a Markdown document (*.md) in the *docs* folder.
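Assuming the notebook is saved in the *Python* folder, the Markdown copy for the *docs* folder can be generated with, for example, `jupyter nbconvert --to markdown Python/benchmark.ipynb --output-dir docs` (the notebook name here is only illustrative).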
Minimal benchmark tests should include:
- Comparison of the model output for simple shapes, such as spheres and/or cylinders, against analytical solutions
- Cross-language model output tests (e.g. the absolute difference in target strength (TS) at given frequencies)
- Evaluation of the time needed to run the model in each available language, both for a single simulation (single frequency, single incident angle) and for a range of simulations (e.g. over a frequency and/or incident-angle range); see the timing sketch after this list
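The timing test could, for one language, follow a sketch like the one below (Python shown). The `ts_model` function, the frequency range and the angle range are hypothetical placeholders, and the R and Matlab implementations would be timed with their own native timers.

```python
"""Minimal timing sketch for a scattering-model benchmark (hypothetical ts_model)."""
import time
import numpy as np


def ts_model(frequency_hz: float, angle_deg: float) -> float:
    """Placeholder standing in for a scatmod TS computation."""
    return -45.0 + 0.01 * np.sin(frequency_hz * 1e-5 + np.radians(angle_deg))


def time_call(func, *args, repeats: int = 20) -> float:
    """Average wall-clock time (s) of func(*args) over `repeats` runs."""
    start = time.perf_counter()
    for _ in range(repeats):
        func(*args)
    return (time.perf_counter() - start) / repeats


frequencies = np.linspace(10e3, 200e3, 200)   # Hz, hypothetical range
angles = np.arange(0, 91, 5)                  # degrees, hypothetical range

# Single simulation: one frequency, one incident angle.
t_single = time_call(ts_model, frequencies[0], angles[0])

# Range of simulations: full frequency x incident-angle grid.
start = time.perf_counter()
for f in frequencies:
    for a in angles:
        ts_model(f, a)
t_grid = time.perf_counter() - start

print(f"single simulation : {t_single * 1e3:.3f} ms")
print(f"{frequencies.size * angles.size} simulations : {t_grid:.3f} s")
```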
Additional tests can be added.