meeting 2024 11 26 n315
-
I've re-run two sets of 11 models, with the only difference between the two sets being the MGE.
- Note that in the plots below, "MGE A" is the MGE we have been using, and MGE B is the MGE associated with the best-fitting parameters (AJ = 0.75 mag) in the NGC 315 paper.
- I ran the 10 lowest (base) model points from our grid along with the set of "best parameters" from our posteriors. The resulting NNLS chi2s are largely in agreement, though there is a bit more scatter than I might have expected from individual models. However, the stats across the 10 realizations are quite nice:
- The mean difference between the two sets is -0.77 (MGE A is better fitting on average), and the median difference between the two sets is 0.0143 (see the quick sketch below this list for how I'm tallying this). This suggests to me that there might be some "noise" when using one MGE vs. the other, but I wouldn't expect the landscape to dramatically change since the chi2s are not egregiously different and the aggregate statistics seem alright. This is further confirmed in the plots below showing the current set of best scaled models, with these two sets of 10 models plotted on top. The three sets' general trends are quite consistent with one another.
- I've also plotted some heatmaps below showing the difference in chi2 per bin/per moment for the two sets of models. My hope here was to see if the two models are systematically different in a given bin/moment, but to me, it looks like there is a good deal of random scatter per bin/per moment rather than anything systematic.
- I think generally speaking the agreement here is decent and we're fine to move on. I could also be convinced (given the somewhat large differences between individual models) that it might be worth submitting a very small grid of 100 models (over, say, BH x ML) to make sure the minimum doesn't change. I could go either way on this suggestion -- it really just depends on how much we want to read into individual realizations vs. the aggregate stats.
-
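A minimal sketch of how I'm tallying the paired chi2 differences between the two MGE runs -- the arrays here are placeholders for illustration only, not the actual chi2 values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder chi2 values -- the real numbers are the NNLS chi2s from the
# 11 re-run models (10 base grid points + the best-parameter set).
chi2_mge_a = 9000.0 + 50.0 * rng.standard_normal(11)
chi2_mge_b = chi2_mge_a + rng.normal(0.0, 3.0, size=11)

# Paired differences: negative means MGE A has the lower (better) chi2.
diff = chi2_mge_a - chi2_mge_b
print(f"mean difference:   {diff.mean():+.3f}")
print(f"median difference: {np.median(diff):+.4f}")
print(f"scatter (std):     {diff.std(ddof=1):.3f}")
```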
First, here's a plot showing the chi2s plotted against one another, colored by parameter. I think that this plot, while accurate, is a bit scary to look at in isolation given some of the differences here:
Chi2 Comparison |
---|
![]() |
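For reference, a minimal version of this kind of 1-to-1 comparison plot -- the arrays and the parameter used for the color scale are placeholders, not the actual model grid:

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder arrays standing in for the per-model NNLS chi2s and a model
# parameter (e.g. black hole mass) used for the color scale.
rng = np.random.default_rng(1)
param = np.linspace(1.0, 3.0, 11)
chi2_a = 9000.0 + 40.0 * rng.standard_normal(11)
chi2_b = chi2_a + rng.normal(0.0, 3.0, size=11)

fig, ax = plt.subplots(figsize=(5, 5))
sc = ax.scatter(chi2_a, chi2_b, c=param, cmap="viridis")
lims = [min(chi2_a.min(), chi2_b.min()), max(chi2_a.max(), chi2_b.max())]
ax.plot(lims, lims, "k--", lw=1)  # 1-to-1 line
ax.set_xlabel("NNLS chi2 (MGE A)")
ax.set_ylabel("NNLS chi2 (MGE B)")
fig.colorbar(sc, ax=ax, label="model parameter")
plt.show()
```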
- And here's a quick plot showing the current set of best scaled models (in green) compared to the chi2s from MGE A and MGE B. There's nice agreement between our newest results and the old results, and it'd be hard for me to tell the three sets apart from one another:
1d Panels (with Best Scales) | 1d Panels (No Best Scales) |
---|---|
![]() | ![]() |
Heatmaps: in these plots, I'm showing the difference in chi2 associated with each bin and each moment. The blue/red ends of the color scale correspond to a chi2 difference of +/-2, and my hope was to see if there were any systematic differences between the two fits. This doesn't seem to be the case: there's a decent balance of red and blue cells, and no systematic issues jump out to my eye. (A minimal plotting sketch follows the table.)
Model 1 | Model 2 | Model 3 | Model 4 | Model 5 | Model 6 | Model 7 | Model 8 | Model 9 | Model 10 | Model 11 |
---|---|---|---|---|---|---|---|---|---|---|
![](images/241126/chi2_diff_0-1.png) | ![](images/241126/chi2_diff_1-1.png) | ![](images/241126/chi2_diff_2-1.png) | ![](images/241126/chi2_diff_3-1.png) | ![](images/241126/chi2_diff_4-1.png) | ![](images/241126/chi2_diff_5-1.png) | ![](images/241126/chi2_diff_6-1.png) | ![](images/241126/chi2_diff_7-1.png) | ![](images/241126/chi2_diff_8-1.png) | ![](images/241126/chi2_diff_9-1.png) | ![](images/241126/chi2_diff_10-1.png) |
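For completeness, roughly how these heatmaps are built -- the bin/moment counts and chi2 arrays below are placeholders; the real per-bin, per-moment chi2 terms come from the two NNLS fits:

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder per-bin, per-moment chi2 terms for one model with each MGE.
n_bins, n_moments = 60, 12  # e.g. v, sigma, h3, ..., h12
rng = np.random.default_rng(2)
chi2_a = rng.chisquare(df=1, size=(n_bins, n_moments))
chi2_b = rng.chisquare(df=1, size=(n_bins, n_moments))

diff = chi2_a - chi2_b  # color scale saturates at a chi2 difference of +/-2

fig, ax = plt.subplots(figsize=(4, 8))
im = ax.imshow(diff, cmap="coolwarm", vmin=-2, vmax=2, aspect="auto")
ax.set_xlabel("moment")
ax.set_ylabel("kinematic bin")
fig.colorbar(im, ax=ax, label="chi2 (MGE A) - chi2 (MGE B)")
plt.show()
```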
- And here's a quick plot I made just before the meeting showing the enclosed light for the two MGEs, the deprojected luminosity densities, and the ratio of the enclosed light (a small sketch of the enclosed-light calculation follows the figures). It seems like there is a decent disagreement between the two MGEs in terms of enclosed stellar mass.
- I was also curious how the enclosed mass for B3 compares, and it is in much closer agreement with A/B1 than with B2. There's potentially something weird about B2?
Plot | Ratio | A vs. B3 |
---|---|---|
![]() | ![]() | ![]() |
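A small sketch of the enclosed-light comparison -- here each Gaussian is treated as circular (flattening ignored), and the (L_k, sigma_k) values are placeholders standing in for the actual MGE A and MGE B tables:

```python
import numpy as np
import matplotlib.pyplot as plt

def enclosed_light(R, L_k, sigma_k):
    """Enclosed luminosity of a circular 2D MGE within projected radius R.

    Each Gaussian with total luminosity L_k and dispersion sigma_k (same
    units as R) contributes L_k * (1 - exp(-R^2 / (2 sigma_k^2))).
    """
    R = np.atleast_1d(R)[:, None]
    return np.sum(L_k * (1.0 - np.exp(-R**2 / (2.0 * sigma_k**2))), axis=1)

# Placeholder components; the real (L_k, sigma_k) come from MGE A and MGE B.
L_a, sig_a = np.array([1e9, 5e9, 2e10]), np.array([0.5, 3.0, 20.0])   # Lsun, arcsec
L_b, sig_b = np.array([2e9, 4e9, 2.2e10]), np.array([0.7, 3.5, 22.0])

R = np.logspace(-1, 2, 200)  # arcsec
ratio = enclosed_light(R, L_a, sig_a) / enclosed_light(R, L_b, sig_b)

plt.semilogx(R, ratio)
plt.xlabel("R [arcsec]")
plt.ylabel("enclosed light ratio (A / B)")
plt.show()
```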
- Context and Takeaways:
-
MGE A vs. MGE B vs. MGE C
- Looking at the 1-to-1 scatter plots below, you can see that we still seem to have a good deal of scatter between the NNLS chi2s for MGE A vs. B vs. C. This is especially confusing since the mass profiles of MGE A and C are very similar except in the very central region. Despite this scatter, the general shapes of the landscapes seem to be in agreement.
- One very interesting bit of information -- the kinem chi2s between the three MGEs are much more closely aligned than the NNLS chi2s. I can clearly see a linear trend (with somewhat of an offset) between the MGEs, and they do seem to track each other better. The agreement is also better when we INCLUDE the dummy moments than when we exclude them (a small sketch of this bookkeeping follows below).
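A minimal sketch of what I mean by "with/without dummies" -- the array layout (real moments first, dummy higher moments in the trailing columns) is my own convention here, not the TriOS internals:

```python
import numpy as np

def kinem_chi2(model, data, err, n_real_moments, include_dummies=True):
    """Sum of ((model - data) / err)^2 over kinematic bins and moments.

    model, data, err are (n_bins, n_moments) arrays ordered v, sigma, h3, ...,
    with any dummy (unconstrained, large-uncertainty) higher moments assumed
    to sit in the trailing columns.
    """
    terms = ((model - data) / err) ** 2
    if not include_dummies:
        terms = terms[:, :n_real_moments]
    return terms.sum()
```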
-
Central Bin Issue
- I'm still running into a very weird issue with the central bin, where h3 through h8 do not appear to be read properly. This seems like it's something internal to TriOS, since swapping the first line with a line that is known to work (the second kinematic bin) doesn't fix the issue. The kindata.dat files themselves seem to be fine, too, and I never get any errors along the way, which makes it all the more mysterious. While this is strange and I'll sort it out, I doubt it greatly impacts anything since, at most, this would shift the chi2 by ~5 or so if each moment were poorly fit.
- In short, the data are being fed in via kindata.dat in a totally standard way, and there's nothing strange about the first bin line. It seems like everything gets read by TriOS properly, but the triaxnnls_kinem.out file (which comes from the NNLS fit) suddenly has the input equal to 0 for h3 through h12, and an uncertainty equal to 1e31 for h3 through h12.
- It looks like this gets assigned near line ~450 in the triaxnnls code, but I can't see why it would have trouble reading h3 through h12 for the first bin alone. A quick side-by-side check of the input and output for the first bin is sketched below.
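To help chase this down, a quick (assumption-laden) side-by-side check of the first bin -- the column layout and header handling below are guesses and would need to be matched to the actual kindata.dat / triaxnnls_kinem.out formats:

```python
import numpy as np

# Assumed: both files are whitespace-delimited with one header line; adjust
# skip_header and column indices to match the real formats.
kin_in = np.genfromtxt("kindata.dat", skip_header=1)
kin_out = np.genfromtxt("triaxnnls_kinem.out", skip_header=1)

print("kindata.dat, bin 1:         ", kin_in[0])
print("triaxnnls_kinem.out, bin 1: ", kin_out[0])

# Flag columns showing the symptom described above: input read as 0 with an
# uncertainty of ~1e31.
suspicious = (kin_out[0] == 0.0) | (np.abs(kin_out[0]) > 1e30)
print("suspicious columns in bin 1:", np.where(suspicious)[0])
```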
-
Chi2 Comparison |
---|
![]() |
- And here are the 1d panels for the parameters, along with our current set of best models:
1d Panels (with Best Scales) | 1d Panels (No Best Scales) |
---|---|
![]() | ![]() |
The heatmaps for MGE A vs. MGE C follow the same format: the difference in chi2 for each bin and each moment, with the blue/red ends of the color scale corresponding to a chi2 difference of +/-2. Again, I was hoping to spot any systematic differences between the two fits, and again there don't seem to be any: there's a decent balance of red and blue cells, and nothing systematic jumps out to my eye.
- I was curious whether the NNLS chi2s and kinem chi2s were appreciably different since, up to now, we've been comparing the NNLS values between models. One very interesting bit is that the kinem chi2s between the three MGEs seem to be in better agreement (in particular when I include the dummy moments):
- NOTE THAT THESE ARE LABELLED NNLS BUT ARE KINEM!
  | MGE A vs. B | MGE A vs. C |
---|---|---|
no dummies | ![]() | ![]() |
with dummies | ![]() | ![]() |