The ability to pre-load and cache all required skims now works and is checked in to the master branch
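A minimal sketch of the pre-load-and-cache idea: read every required skim once at startup and hold it in memory so later lookups never touch disk. The `SkimCache` class and `load_skim_from_disk` helper are hypothetical names for illustration, not the actual implementation.

```python
import numpy as np

def load_skim_from_disk(name):
    # Placeholder for a real OMX/HDF5 read of a zone-to-zone skim matrix
    return np.full((3, 3), fill_value=len(name) % 10, dtype=float)

class SkimCache:
    """Eagerly load all required skims up front and serve them from memory."""

    def __init__(self, names):
        # One disk read per skim, done once at model startup
        self._skims = {name: load_skim_from_disk(name) for name in names}

    def get(self, name):
        # Pure in-memory lookup after the initial load
        return self._skims[name]

cache = SkimCache(["DIST", "SOV_TIME_AM"])
dist = cache.get("DIST")
```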
Pre-stacking skims (building a 3D array from the list of 2D skims by time period), which saves significant runtime and memory usage, is checked in as the stack-injectable branch
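The pre-stacking step can be sketched as follows: stack the per-time-period 2D origin-destination matrices into one contiguous 3D array so a skim value is fetched with a single indexed lookup. The period names, zone count, and `lookup` helper are illustrative assumptions.

```python
import numpy as np

periods = ["AM", "MD", "PM"]  # assumed time-period labels
num_zones = 4
rng = np.random.default_rng(0)

# One 2D origin-destination skim per time period, as they arrive from disk
skims_2d = {p: rng.random((num_zones, num_zones)) for p in periods}

# Pre-stack into a single (period, origin, destination) array
stacked = np.stack([skims_2d[p] for p in periods], axis=0)
period_index = {p: i for i, p in enumerate(periods)}

def lookup(period, orig, dest):
    """Fetch a skim value from the pre-stacked 3D array."""
    return stacked[period_index[period], orig, dest]
```

Because the stack is one contiguous block, vectorized lookups over many origin-destination pairs become a single fancy-indexing operation instead of a per-period dictionary walk.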
When solving the MNL model, the table of choosers is merged with the table of alternatives being considered, and this can require too much RAM. We added chunking within the MNL model and will try a full model run again. This is under discussion and is in the chunked-interaction_simulate branch
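A hedged sketch of the chunking approach: instead of cross-joining all choosers with all alternatives at once (the full interaction table is what exhausts RAM), process the choosers in fixed-size chunks and build only one chunk's interaction table at a time. The function and column names are illustrative, and the toy utility stands in for the real MNL expression evaluation.

```python
import numpy as np
import pandas as pd

choosers = pd.DataFrame({"income": np.arange(10.0)}, index=range(10))
alternatives = pd.DataFrame({"alt_cost": [1.0, 2.0, 3.0]})

def choose_in_chunks(choosers, alternatives, chunk_size=4):
    """Cross-join choosers with alternatives one chunk at a time."""
    choices = []
    for start in range(0, len(choosers), chunk_size):
        chunk = choosers.iloc[start:start + chunk_size]
        # Interaction table for this chunk only: chunk_size * n_alts rows
        interaction = chunk.reset_index().merge(alternatives, how="cross")
        # Toy utility; a real model would evaluate the MNL spec here
        interaction["utility"] = interaction["income"] - interaction["alt_cost"]
        # Keep the best alternative per chooser in this chunk
        best = interaction.loc[interaction.groupby("index")["utility"].idxmax()]
        choices.append(best.set_index("index"))
    return pd.concat(choices)

result = choose_in_chunks(choosers, alternatives)
```

Peak memory is now bounded by `chunk_size * len(alternatives)` interaction rows rather than `len(choosers) * len(alternatives)`.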
Team agreed to go ahead and implement logging, since it will be helpful for working through the various issues we've encountered; this is the start of Task 9 (tracing)
Continue to work through issues that arise and keep track of architecture design issues for a group discussion later this year
Next Steps
Run the full model with chunking and, if successful, report memory usage, runtimes, etc.