Meeting 2025-04-10

Context

  • We added a few scalings (0.98, 0.99, 1.0, 1.01, 1.02) to our most recent rejection sample grid to verify that we are sampling the minimum well. Here's a quick reminder of the models we have run so far:

    • First 2200 models are from rerunning our old grids centered at the old minimum. We added scalings for s = (0.95, 0.98, 0.99, 1.0, 1.01, 1.02, 1.05)
    • The next 1000 models were built from rejection sampling off of the "best scaled" version of the original 2200 models.
    • The final 1000 models are built from rejection sampling off the "best scaled" 2200 models + the 1000 rejection models just above. For these 1000 points, since they seemed to nicely straddle the minimum, we added scalings for s=(0.98,0.99,1.0,1.01,1.02).
    • Thus the total number of models we have is ~4200.
  • Takeaways:

    • All in all I think things are coming along well!
    • These newest scalings did not appreciably change the parameters. In fact, the GPR and dynesty results from the newest grid's base (s = 1.0) models are consistent with those from the best-scaled version, which are in turn consistent with the results using the full set of ~4200 models.
    • Although the 1d panels look "imbalanced," I don't think this is impacting our parameters, since we're making a cut of K ≈ 60 anyway, and our 1d panels within a delta chi2 of 60 are quite balanced (most of those points come from our latest, more balanced grids). A short sketch of this best-scale and delta chi2 bookkeeping follows this list.
      • This also helps explain why the newest grids give such similar parameters: essentially all of the newest grid's models lie within a delta chi2 of ~40 of the minimum, so extending K beyond 40 does not pick up any new points in the space.
  • Moving forward:

    • If we're still a bit uncomfortable with the current level of sampling of the space, we could always add a few scalings to our first rejection set of models, for which we currently have no scalings. I don't think this will change anything either, since the newest grid is already exploring the proper parts of the space, but it is the most natural next step if we decide to go that route.
    • Alternatively, our production-level plots using the All Models (Best Scales) case look quite nice. I think we could likely move forward with what we have now.
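To make that bookkeeping concrete, here is a minimal sketch of how the "best scaled" version of each model and the delta chi2 < K cut could be computed. The array shapes, variable names, and placeholder chi2 values below are assumptions for illustration, not the actual pipeline.

```python
import numpy as np

# Placeholder chi2(model, scale) values; the real values come from the
# Schwarzschild model outputs, not a random number generator.
scales = np.array([0.95, 0.98, 0.99, 1.00, 1.01, 1.02, 1.05])
rng = np.random.default_rng(0)
chi2 = rng.normal(1000.0, 20.0, size=(2200, scales.size))

# "Best scaled" version of each model: keep whichever scaling minimizes chi2.
best_idx = np.argmin(chi2, axis=1)
best_scale = scales[best_idx]
best_chi2 = chi2[np.arange(chi2.shape[0]), best_idx]

# Delta chi2 cut of K ~ 60 about the global minimum, as used for the 1d panels.
K = 60.0
delta_chi2 = best_chi2 - best_chi2.min()
keep = delta_chi2 < K
print(f"{keep.sum()} of {keep.size} best-scaled models lie within delta chi2 < {K:g}")
```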

Diagnostics

  • First, here's a cumulative distribution of our "best scaled" set of ~4200 models, as well as a few different sets of 1d panels.
[Figure: Cumulative distribution]
[1d panels: All Models (Best Scales) | Newest Grid All Scales | Newest Grid Best Scales]
NOTE: The "Newest Grid All Scales" panel was bugged! Check the bottom of this page for details and for the corrected set of panels!
  • And here's a quick comparison of the chi2 histograms to see the improvement from each grid:
[Figure: Chi2 Histograms]
  • I also ran GPR + dynesty on a few subsets of the data for a few different values of K. Note that these all use nu=1.5, since that's typically what we've used in our published results. I've included a vertical plot first comparing the different cases, and the full set of posteriors just below (a rough sketch of this GPR + dynesty step is included at the end of this section):
    • In the vertical plot, the lowest three points use all of our models/grids, whereas the top three points are limited to only the most recent grid of points, for which we ran scalings yesterday.
    • Also note that effectively all of the newest grid's models lie within K = 40 of the minimum, so there's hardly any difference between the K = 40, 60, and 80 cases, since they use all the same model points.
[Figure: Vertical Plot]
[Posterior panels for K=40, K=60, and K=80 for each case: All Models (Best Scales); Newest Grid Best Scales Only; Newest Grid s=1.0 Only; Newest Grid All Scales]
  • And lastly, here's a "production" version of our cornerplots. This version specifically uses the "All Models, Best Scales" K=80 set of posteriors. Note that I re-ran this case with nIter=8, whereas the top version of the plot uses nIter=1. This looks nice!
[Figure: Production Cornerplot]
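As a rough sketch of what the GPR + dynesty step above could look like: fit a Gaussian process with a Matern nu=1.5 kernel to the chi2 values of the kept models, then nested-sample the implied likelihood exp(-chi2/2). Everything here other than nu=1.5 (the other kernel settings, priors, parameter ranges, the nIter re-fitting loop, and all variable names) is an assumption for illustration, not the actual pipeline.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from dynesty import NestedSampler

# Placeholder training set: model parameters theta (n_models, n_dim) and their
# best-scaled chi2 values within the delta chi2 < K cut. Real inputs differ.
rng = np.random.default_rng(1)
n_dim = 3
theta = rng.uniform(0.0, 1.0, size=(400, n_dim))
chi2 = 1000.0 + 50.0 * np.sum((theta - 0.5) ** 2, axis=1)

# Matern kernel with nu=1.5, matching the value quoted above.
gpr = GaussianProcessRegressor(kernel=Matern(nu=1.5), normalize_y=True)
gpr.fit(theta, chi2)

def loglike(x):
    # ln L = -chi2 / 2, with chi2 interpolated by the GPR surrogate.
    return -0.5 * float(gpr.predict(x.reshape(1, -1))[0])

def prior_transform(u):
    # Uniform priors over an assumed unit-cube parameter range.
    return u

sampler = NestedSampler(loglike, prior_transform, ndim=n_dim, nlive=200)
sampler.run_nested(print_progress=False)
samples = sampler.results.samples  # posterior samples, weighted by results.logwt
```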

Quick Email Follow Up

  • Chung-Pei had noticed that the 1d panels above appeared to have some inconsistent points between the All Scales and Best Scales panels, and this is in fact the case. I made a silly mistake in generating the "newest grid all scales" plot and applied the scale factors twice.

    • For a bit more detail: my analysis script is set up to combine and analyze all the best-scaled models, whereas creating the "all scales" plot is currently a more manual process. I had saved all the scalings to a single file for convenience yesterday (normally we don't run GPR/dynesty on all the scalings together).
    • When creating the newest grid all scales plot, I applied a second scale-factor correction to the already scale-factor-corrected points, effectively applying the scalings to the data twice.
    • The net effect is that the mass parameters are stretched by a factor of scalefactor^2 rather than scalefactor (a toy illustration of this is included at the end of this section).
  • Here's a plot which hopefully makes this a bit clearer:

    • The first 5 columns are the scalings plotted individually
    • The sixth column shows the first 5 columns on a single plot
    • The seventh column shows the "best scaled" models only
    • And the last column shows the lowest 25 models but connects the adjacent scalings to bring out the "scallop" shapes
[Figure: scalings comparison plot]
For future reference, here are the "newest grid all scales" panels from the table near the top of this page that had the bug in them, along with a corrected version that does not have this bug.
[Panels: Bugged Panels | Corrected Panels]
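For concreteness, here is a toy illustration of the double-scaling mistake, assuming the mass parameters scale linearly with the scale factor (which is what the scalefactor^2 stretch implies). The variable names and values are made up.

```python
# Toy numbers only; not values from the actual models.
scale = 1.02
mbh_unscaled = 1.0e9  # placeholder black hole mass from an s = 1.0 model [Msun]

mbh_scaled = scale * mbh_unscaled       # correct: scale factor applied once
mbh_double_scaled = scale * mbh_scaled  # bug: scale factor applied a second time

print(mbh_double_scaled / mbh_unscaled)  # scale**2 = 1.0404, not 1.02
```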

A few best-fitting plots

  • I wanted these to prepare for the meeting:
[Figures: Radial Moments; Heat Map]
