Working with the optimizer
Optimization parameters can be specified at runtime in a JSON file, like so:
runpenrose --config=opt-config/test.json sub/twosets.sub sty/transform-rect.sty dsll/setTheory.dsl
A configuration looks like this:
{ "optMethod": "BFGS" }
If no configuration is specified, the system uses opt-config/default.json.
Available parameters can be found in the OptConfig type defined in GenOptProblem.
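For illustration, a config that selects L-BFGS and overrides its memory size might look like the following. (The defaultBfgsMemSize value here is hypothetical; check OptConfig for the exact field names and defaults.)

```json
{
  "optMethod": "BFGS",
  "defaultBfgsMemSize": 10
}
```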
All functions involved in the optimization are passed a config parameter containing the configuration values supplied at runtime.
L-BFGS has a tunable parameter, defaultBfgsMemSize: the number of correction-vector pairs to store (a higher m uses more memory). Good values are determined empirically (see Nocedal). With m = 0 it reduces to gradient descent; if numSteps < m, no corrections are ever discarded and it behaves like full BFGS, a quasi-Newton method. Try changing m if the optimization is behaving strangely.
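To make the role of m concrete, here is a minimal Python sketch of the standard two-loop recursion at the heart of L-BFGS (this is an illustration, not Penrose's Haskell implementation; the function and parameter names are made up). The memory is just a bounded list of (s, y) pairs; with m = 0 the list stays empty and every step is plain gradient descent.

```python
import numpy as np

def lbfgs(grad, x0, m=5, steps=100, lr=1.0):
    """Minimal L-BFGS sketch: keep the last m correction pairs (s, y)
    and apply the standard two-loop recursion to approximate H^-1 * g."""
    x = np.asarray(x0, dtype=float)
    s_hist, y_hist = [], []              # the "memory": at most m pairs
    g = grad(x)
    for _ in range(steps):
        if np.linalg.norm(g) < 1e-10:
            break
        q = g.copy()
        alphas = []                      # first loop: newest pair first
        for s, y in zip(reversed(s_hist), reversed(y_hist)):
            a = (s @ q) / (y @ s)
            alphas.append(a)
            q -= a * y
        if s_hist:                       # scale by gamma = s'y / y'y
            s, y = s_hist[-1], y_hist[-1]
            q *= (s @ y) / (y @ y)
        for s, y, a in zip(s_hist, y_hist, reversed(alphas)):
            b = (y @ q) / (y @ s)
            q += (a - b) * s             # second loop: oldest pair first
        x_new = x - lr * q               # q now approximates H^-1 * g
        g_new = grad(x_new)
        s_k, y_k = x_new - x, g_new - g
        if y_k @ s_k > 1e-12:            # curvature guard before storing
            s_hist.append(s_k)
            y_hist.append(y_k)
            if len(s_hist) > m:          # bounded memory: drop oldest pair
                s_hist.pop(0)
                y_hist.pop(0)
        x, g = x_new, g_new
    return x
```

On a simple quadratic this converges in a couple of steps with any m ≥ 1; setting m = 0 recovers plain gradient descent, which typically needs a smaller step size.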
TODO
- optimization policies
- output
- weights
- debugging
The asymptotic behavior of the objective function can matter a great deal (e.g., x^2 vs. x^4).
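A quick toy illustration of why (hypothetical code, not part of Penrose): fixed-step gradient descent converges geometrically on x^2, but crawls on x^4 because its gradient 4x^3 vanishes much faster near the minimum (and, from a distant start, that same gradient explodes, forcing smaller steps).

```python
def gd(grad, x, lr=0.1, steps=100):
    """Fixed-step gradient descent on a 1-D function."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# x^2: gradient 2x shrinks in proportion to x -> geometric convergence.
near_quadratic = gd(lambda x: 2 * x, x=1.0)
# x^4: gradient 4x^3 vanishes near 0 -> the iterate stalls well away from 0.
near_quartic = gd(lambda x: 4 * x**3, x=1.0)
```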
Found a problem or have a suggestion? Please open a GitHub issue and tag it with documentation!