Discuss tradeoffs between benchmark runtime, stability, and applicability.
Notes from the discussion:
ASV (airspeed velocity) looks like a good fit
Benchmark files will be stored in a new repo - activitysim_benchmarks
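Keeping benchmarks in a separate repo is typically done by pointing ASV's `asv.conf.json` at the upstream project repo while the benchmark code and results live in the benchmarks repo itself. A minimal sketch of such a config (values are illustrative assumptions, not the actual activitysim_benchmarks configuration):

```json
{
    "version": 1,
    "project": "activitysim",
    "repo": "https://github.com/ActivitySim/activitysim.git",
    "branches": ["develop"],
    "environment_type": "conda",
    "benchmark_dir": "benchmarks",
    "results_dir": "results",
    "html_dir": "html"
}
```

With this layout, result JSON files land in `results/` per machine and can be committed and pushed like any other files.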
Currently only coded to work for MTC TM1, but it can be extended to other regions provided their data are in activitysim_resources
Only want to benchmark key commits, such as version releases
Want to do this on the develop branch so we find issues before releasing (unlike with the chunking issue)
ASV therefore becomes part of the release process
Run benchmarks on your own machine and then upload the result files to activitysim_benchmarks
Not run by Travis CI, since benchmarking runs the models several times on a sample large enough to give meaningful results (e.g., 100k households), which is too big and too slow for Travis
Added new benchmarking subpackage that will need review
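For reviewers unfamiliar with ASV: benchmark files are plain Python modules in which ASV discovers methods by name prefix (`time_*` for wall time, `peakmem_*` for peak memory). The sketch below is a hypothetical benchmark file, not the actual activitysim benchmarking subpackage; `simulate_model_run` is a stand-in for running a model on a household sample.

```python
def simulate_model_run(households):
    # Placeholder for running an ActivitySim model on a household sample;
    # here it just does some CPU work proportional to the sample size.
    return sum(h * h for h in range(households))


class TimeMTCTM1:
    # ASV runs each benchmark once per combination of params.
    params = [1_000, 10_000]        # household sample sizes
    param_names = ["households"]

    def setup(self, households):
        # setup() runs before each measurement and is excluded from timing.
        self.households = households

    def time_full_run(self, households):
        # Measured: wall-clock time of the (stand-in) model run.
        simulate_model_run(self.households)

    def peakmem_full_run(self, households):
        # Measured: peak memory use of the same run.
        simulate_model_run(self.households)
```

ASV handles repetition and statistics itself, which is why each method simply performs one run.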
This setup doesn't help much with hardware purchase decisions, since those require benchmarking your full-scale model under different configurations (although this tool could help organize that analysis)
Newman to work with PSRC and SANDAG to test it out
Newman working on completing the feature and documentation as well
Would be good to also record drive type (SSD vs. spinning disk) and memory utilization