Meetings - philippeller/freeDOM GitHub Wiki

20. May 2020

  • Aaron implemented a GA + simulated annealing optimizer, which requests 250 LLH evaluations in batch mode; using that he can reconstruct an event in ~500-700 ms (a minimal sketch of what batched evaluation means follows this list). Resolutions already look quite reasonable, except for ~10% of events where the minimization failed; those could lie outside the region where our NN is valid and need to be investigated.
  • Time has some outliers, and cascade energy in particular has a larger tail on the positive side (fitted energy too high).
  • It is plausible that these limitations come from the likelihood itself, with the charge net being the suspect.
  • Ways to improve the chargenet:
    1. train a per-DOM chargenet to assess how good we can get
    2. train a per-string or per-z-layer chargenet
    3. include more training data
    4. include additional info like number of hits
    5. get rid of the chargenet, include charge in the hitnet, and add some yet-to-be-defined machinery to make it work
  • Jan did studies with events that have the same MCTree (i.e. the same particles from the interaction) but separate photon propagation, i.e. different hit patterns in the detector. 1-D scans show reasonable behaviour. These events could also be used to assess the validity of estimated reco uncertainties.
  • ...
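
A minimal sketch of what batched LLH evaluation means here (all names are placeholders, not the actual freeDOM API): the optimizer hands a whole population of hypotheses to the likelihood at once, so the NN can evaluate them in a single GPU call.

```python
import numpy as np

# Hypothetical batched likelihood: evaluates a whole population of hypotheses
# (n_hypotheses x n_params) in one call, which is what makes a population-based
# optimizer (GA + simulated annealing) efficient when the LLH lives on a GPU.
def batch_llh(params: np.ndarray) -> np.ndarray:
    """params: shape (n_hypotheses, n_params) -> LLH values, shape (n_hypotheses,)"""
    # Toy stand-in; in freeDOM this would call the hitnet/chargenet instead.
    return -0.5 * np.sum(params**2, axis=1)

rng = np.random.default_rng(42)
population = rng.normal(size=(250, 8))    # e.g. 250 hypotheses, 8 event parameters
llh_values = batch_llh(population)        # one batched evaluation per optimizer iteration
best = population[np.argmax(llh_values)]  # GA/annealing then updates the population
```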

07. May 2020

Initial Meeting, participants: Doug, Aaron, Justin (PSU), Jan (JGU), Martin, Philipp (TUM)

Ideas and suggestions that were discussed:

  • Use raw waveforms instead of pulses. It was decided to first go with pulses, as they are a known quantity, but to keep waveforms in mind. One comment that was raised: the ADC digitizer bins would no longer be independent/uncorrelated.
  • Reuse parts of the retro framework where it makes sense, or think about merging the two.
  • Directly sample the posterior (MCMC, HMC, MultiNest, ...); a minimal sampling sketch follows this list.
  • First step: use SRTTW pulses, the CRS minimizer, and DeepCore events, and compare to known recos. We should not expect much improvement in resolution, but rather in speed.
  • It should then be relatively straightforward to apply the same technique to ICU.
  • The validity range of the parameters should be assessed and corresponding constraints used in the minimization process. We could restrict ourselves to a smaller parameter space in the beginning to keep things simple.

    What I meant here was to intentionally shrink the parameter space so that we could study the impact of exceeding the validity range in a more easily understood system. Looking instead at (say) events at very low energy, or with a vertex outside the detector's physical edge, would make it harder to disentangle the impact of the validity range from other factors. –Doug

  • PSU has, and will likely get even better, deep-learning-optimized compute resources; accounts can be requested through Doug if needed.
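
As an illustration of the "directly sample the posterior" idea, here is a minimal sketch using the emcee ensemble sampler with a toy log-likelihood and a simple box prior standing in for the validity-range constraints; none of these names reflect an agreed freeDOM interface.

```python
import numpy as np
import emcee  # ensemble MCMC sampler, used here purely for illustration

# Toy log-likelihood; in practice this would be the NN likelihood.
def log_llh(theta: np.ndarray) -> float:
    return -0.5 * float(np.sum(theta**2))

# Box prior standing in for the validity range of the parameters.
LO, HI = -5.0, 5.0
def log_posterior(theta: np.ndarray) -> float:
    if np.any(theta < LO) or np.any(theta > HI):
        return -np.inf  # reject hypotheses outside the validity range
    return log_llh(theta)

ndim, nwalkers = 8, 32
p0 = np.random.default_rng(0).normal(scale=0.1, size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior)
sampler.run_mcmc(p0, 1000)
samples = sampler.get_chain(discard=200, flat=True)  # flattened posterior samples
```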

Tasks:

We decided it would be a good idea to separate the likelihood part (NN) from the exploration part (minimizer/sampler).

  • PSU wants to take care of the latter, i.e. provide a general reco framework that takes any likelihood function as input (a minimal interface sketch follows this list)
  • Jan will use his experience of validating table-based likelihoods to validate the NN likelihood, for example by comparing to repeated simulation of a given event
  • Philipp to keep working on the NN training, i.e. providing the likelihood function
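
A minimal sketch of such a separation, assuming the framework only needs a callable that maps event parameters to a negative log-likelihood; the function names below are placeholders (toy_neg_llh stands in for the NN likelihood), not the agreed interface.

```python
import numpy as np
from scipy.optimize import minimize

def reconstruct(neg_llh, seed, bounds=None):
    """Generic reco step: works with any negative log-likelihood callable,
    whether it comes from tables or from the NN."""
    result = minimize(neg_llh, x0=seed, bounds=bounds, method="L-BFGS-B")
    return result.x, result.fun

# Placeholder likelihood for illustration only.
def toy_neg_llh(theta):
    return 0.5 * np.sum((theta - 1.0) ** 2)

best_fit, best_nllh = reconstruct(toy_neg_llh, seed=np.zeros(4),
                                  bounds=[(-10.0, 10.0)] * 4)
```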

Next meeting: week of May 18; a suitable time still needs to be found.