Summary: weekly mtg 20160316 (Byron, Steve, me)
Discussed
- We discuss the changes in CCA between YR and YN over time. Seems to always ramp up? Hard to interpret; would probably take multiple weeks to unpack. Tabled as a potential future direction.
- I show them the view of ||YR|| and ||YN|| over time, and then ||YR||/(||YR|| + ||YN||). Steve says he likes this view.
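A minimal sketch of that view (assuming YR and YN are trials × dims arrays of row- and null-space activity; the function name, bin size, and variable names are mine, not from the session code):

```python
import numpy as np

def norm_view(YR, YN, bin_size=50):
    """Per-bin mean ||YR||, ||YN||, and the ratio ||YR|| / (||YR|| + ||YN||).

    YR, YN: (n_trials, n_dims) row- and null-space activity.
    """
    n_bins = min(len(YR), len(YN)) // bin_size
    r, nl = np.zeros(n_bins), np.zeros(n_bins)
    for b in range(n_bins):
        sl = slice(b * bin_size, (b + 1) * bin_size)
        r[b] = np.linalg.norm(YR[sl], axis=1).mean()   # mean row-space norm
        nl[b] = np.linalg.norm(YN[sl], axis=1).mean()  # mean null-space norm
    return r, nl, r / (r + nl)
```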
- I show them how I used the behavioral saturating thresholds to pick my sessions.
- Finally, I show them how we can score hypothesis performance over time, since the hypotheses predict activity independently at each time point.
Sometimes the cloud/habitual fits get better along with performance [figure], and sometimes they get worse [figure].
Steve interprets this as fatal news: Habitual/Cloud, in two out of four sessions, gets worse with time. He doesn't seem to buy my "poor man's version" of a rotated hypothesis, as even the 20131125 rotated version seems to get worse right at the end [figure].
Note how initially no rotation is the best model, but with time the -45° rotated model becomes the best.
But I count this method of looking at hypothesis scores over time as a WIN! We now have a clear criterion for a good hypothesis: its errors should decrease over time if the monkey is transitioning to the new hypothesis, or stay flat if he is using the same hypothesis throughout the block.
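A sketch of that criterion (assuming Y_obs and Y_pred are observed vs. hypothesis-predicted activity per trial; the binning and trend check are my own, not the repo's scoring code):

```python
import numpy as np

def hypothesis_error_over_time(Y_obs, Y_pred, bin_size=100):
    """Mean hypothesis error per trial bin, plus a crude trend estimate.

    A good hypothesis should show a decreasing error trend (monkey
    transitioning to it) or a flat one (using it throughout the block).
    """
    errs = np.linalg.norm(Y_obs - Y_pred, axis=1)        # per-trial error
    n_bins = len(errs) // bin_size
    binned = errs[:n_bins * bin_size].reshape(n_bins, bin_size).mean(axis=1)
    slope = np.polyfit(np.arange(n_bins), binned, 1)[0]  # least-squares trend
    return binned, slope
```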
- Instead of this poor man's stuff, we need to use IME...
- Steve notes lessons from Xiao's experiment:
  - initially: re-aiming
  - then: a slower-timescale change (null space change?) not picked up by behavioral changes = re-mapping
TO DO
check null activity per 100-trial bin (compare using actuals across the whole block vs. actuals within the current bin; see the sketch after this list)
simulate true hypothesis and see if errors truly go down over time
ways it should go down:
- changing model, same hypothesis
- changing hypothesis
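A sketch of the first TO DO item (fit_hypothesis is a hypothetical stand-in for whichever hypothesis fitter is being scored, with interface fit(train) -> predict; only the binning logic matters here):

```python
import numpy as np

def compare_actuals(YN, fit_hypothesis, bin_size=100):
    """Score null activity per 100-trial bin two ways: with the hypothesis
    fit on actuals from the whole block, vs. refit on the current bin only.
    """
    n_bins = len(YN) // bin_size
    predict_whole = fit_hypothesis(YN)  # actuals across the whole block
    whole, within = np.zeros(n_bins), np.zeros(n_bins)
    for b in range(n_bins):
        chunk = YN[b * bin_size:(b + 1) * bin_size]
        whole[b] = np.linalg.norm(chunk - predict_whole(chunk), axis=1).mean()
        within[b] = np.linalg.norm(chunk - fit_hypothesis(chunk)(chunk), axis=1).mean()
    return whole, within
```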
SIMULATIONS
- keep the true row-space activity as close as possible to the actual
- sample it as the mean from a normal, with the private variance from factor analysis
- then go to spikes using the factor model: Lz + mu + private noise
- feed that into the hypothesis assessment
- make sure the experiment code is loading the latents from spikes
- just do this for the perturbation block
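A sketch of that generative step (assuming an already-fit FA model with loadings L, mean mu, and diagonal private variances Psi; the names are mine):

```python
import numpy as np

def simulate_spikes(L, mu, Psi, Z_true, rng=None):
    """Simulate spikes under the FA model: y = L z + mu + private noise.

    L: (n_neurons, n_latents) factor loadings
    mu: (n_neurons,) mean firing rates
    Psi: (n_neurons,) private noise variances from FA
    Z_true: (n_trials, n_latents) latents kept as close as possible to
        the true row activity (used here as the normal's mean)
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(0.0, np.sqrt(Psi), size=(len(Z_true), len(mu)))
    return Z_true @ L.T + mu + noise
```

The simulated spikes would then go through the same latent-extraction and hypothesis-assessment pipeline as the real perturbation-block data, to check that the true hypothesis's errors actually go down over time.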
FIT HYPOTHESIS using B2 as baseline, B1 as what you're fitting
- if the same hypothesis does best going in both directions, that's good... (see the sketch below)
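A sketch of the two-direction check (B1 and B2 are the two blocks' activity; hyps is a hypothetical dict of hypothesis fitters, not the repo's actual interface):

```python
import numpy as np

def best_both_ways(B1, B2, hyps):
    """Fit each hypothesis with one block as baseline, score it on the
    other, then swap; the same winner in both directions is a good sign.

    hyps: {name: fit}, where fit(baseline) -> predict(test).
    """
    def scores(baseline, test):
        return {name: np.linalg.norm(test - fit(baseline)(test), axis=1).mean()
                for name, fit in hyps.items()}
    fwd = scores(B2, B1)  # B2 as baseline, B1 as what you're fitting
    bwd = scores(B1, B2)  # swapped
    return min(fwd, key=fwd.get), min(bwd, key=bwd.get)
```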
IME
- fit to the intuitive block; it should outperform the true decoder