Summary: weekly meeting 20160120 (Byron, Steve, me)

Created issues: #17 #18 #19 #20 #21 #23 #24

Notation: B2[NB1] = activity from the perturbed mapping (B2 = Block 2), projected into B1's null space.
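To make the notation concrete, here's a minimal sketch (my illustration, not code from the repo; the 2x10 mapping dimensions, variable names, and random data are all assumptions):

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)

# Assumed dimensions: B1 maps 10-d neural activity to 2-d cursor kinematics.
B1 = rng.standard_normal((2, 10))   # intuitive mapping (Block 1)
NB1 = null_space(B1)                # 10 x 8 orthonormal basis for Null(B1)

# Y2: activity recorded under the perturbed mapping (Block 2), timesteps x 10.
Y2 = rng.standard_normal((500, 10))

# "B2[NB1]": Block-2 activity projected into B1's null space.
B2_in_NB1 = Y2 @ NB1                # timesteps x 8
```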

What we talked about:

  • I note that even the dumbest model, which always predicts that null space activity is 0, does as well as or better than our best current hypotheses. This is a good benchmark for performance. (Below, the dummy fit is model 2.)

[Figure: error of means per model; model 2 is the dummy]
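As a sketch of why the zero model is a useful benchmark (hypothetical data and names; the repo's actual scoring may differ), the dummy's error-of-means reduces to the magnitude of the observed mean:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for observed B2[NB2]: timesteps x null_dims, with a small offset.
Y_null_obs = 0.5 * rng.standard_normal((500, 8)) + 0.2

# The dummy model predicts z_null = 0 at every timestep, so the error
# between mean predicted and mean observed is just |mean(observed)|.
dummy_pred = np.zeros_like(Y_null_obs)
err_of_means = np.abs(Y_null_obs.mean(axis=0) - dummy_pred.mean(axis=0))
print(err_of_means)  # a hypothesis should beat this to be interesting
```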

  • I show them my new visualization (instead of the sticks Pete used). Roughly, what I tried to get across was that, for observed data:

a. B1[NB1] is close to 0, suggesting the baseline hypothesis.

[Figure: B1[NB1]]

b. B2[NB2] is clearly far from 0, and shows dependence on kinematics as well, suggesting a different strategy for choosing activity in B1 than in B2. [Recall that the monkey performs worse under B2 (shuffled mapping) than under B1 (intuitive mapping).]

[Figure: B2[NB2]]

c. B2[NB2] (red) looks a lot like B1[NB2] (blue), suggesting the monkey carries unnecessary/accidental null activity over from B1 into B2. This is something like a volitional hypothesis, with the exception of a few columns. Steve mentions that a "weak" volitional hypothesis might work here. (Our current volitional hypothesis assumes that your volitional space is Row(B1), but it might be slightly bigger, e.g., 10x3 instead of 10x2; see the sketch after the figures below.)

[Figure: B1[NB2] (blue) vs. B2[NB2] (red)]

Below, the volitional hypothesis (blue) vs. the observed data (red) for B2[NB2]:

[Figure: volitional hypothesis (blue) vs. observed (red) for B2[NB2]]
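A sketch of the "weak" volitional idea in code (my interpretation, not the repo's implementation; the extra direction here is a random placeholder, where in practice it might be estimated from the data):

```python
import numpy as np

rng = np.random.default_rng(0)
B1 = rng.standard_normal((2, 10))

# Current (strong) volitional: volitional space = Row(B1), a 10x2 basis.
V_strong = np.linalg.qr(B1.T)[0]    # 10 x 2 orthonormal basis

# Weak volitional: allow the space to be slightly bigger (10x3) by
# appending one extra direction, e.g. estimated from B1 activity that
# falls outside Row(B1). Here it's just a random placeholder.
extra = rng.standard_normal((10, 1))
V_weak = np.linalg.qr(np.hstack([B1.T, extra]))[0]   # 10 x 3
```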

  • Byron asks to compare early trials (red) vs. late trials (blue) for B2[NB2]. At a rough glance, it looks like there is some movement towards 0 in the later trials; i.e., the monkey starts out using the old mapping and gradually moves towards more efficient (z_null = 0) activity. (A sketch of this comparison follows the figure below.)

[Figure: B2[NB2], early trials (red) vs. late trials (blue)]
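A sketch of that early-vs-late comparison (synthetic stand-in data with a built-in drift towards 0, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200
# Per-trial null activity under B2 (trials x null_dims), with an
# artificial drift towards 0 over the session.
drift = np.linspace(1.0, 0.2, n_trials)[:, None]
Z_null = drift * rng.standard_normal((n_trials, 8))

early, late = Z_null[: n_trials // 2], Z_null[n_trials // 2 :]
print(np.linalg.norm(early, axis=1).mean())  # larger
print(np.linalg.norm(late, axis=1).mean())   # smaller, if moving to z_null = 0
```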

  • We explore ideas for alternative volitional hypotheses. (#17)

Outstanding questions (on my end)

  1. Using multiple maps: Do we have to? Steve says Yes, but I need to think this through some more. I would like to use cross-validation in some way if possible.

  2. Scoring (#19): Steve mentions that we are seeking a "distributional" description of activity in the null space. This is why our current metrics are the first two moments (mean and variance). We discuss the fact that we predict B2 time point by time point, where for each prediction we sample randomly from B1 (rather than, say, taking the mean of B1). This is because we want to explain the variance in addition to the mean: if we predicted the B1 mean at every time point, the predictions would have much less variance than the observed activity (open question?). The sketch below illustrates the difference.
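Here's a minimal sketch of why sampling beats predicting the mean (hypothetical data and names, not the repo's code): sampling from B1 reproduces B1's variance, while repeating the mean has zero variance across time points.

```python
import numpy as np

rng = np.random.default_rng(0)
Z1 = rng.standard_normal((1000, 8))   # stand-in pool of B1 null activity
T = 500                               # number of B2 timesteps to predict

# Prediction 1: sample randomly from B1 at each timestep.
pred_sampled = Z1[rng.integers(0, len(Z1), size=T)]

# Prediction 2: predict the B1 mean at every timestep.
pred_mean = np.tile(Z1.mean(axis=0), (T, 1))

print(pred_sampled.var(axis=0).mean())  # ~1, matches the B1 pool
print(pred_mean.var(axis=0).mean())     # exactly 0
```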