Summary: weekly mtg 20160629 (me) - mobeets/nullSpaceControl GitHub Wiki

Prep

Overview:

  1. Pruning: it's not about similar sensory conditions
  2. Why habitual fails: null and row space means move together
  3. Properties of cloud samples
  4. The best sampling hypothesis
  5. Fitting hypotheses in reverse (from perturbation to intuitive)
  6. Question: What good is pruning?

Pruning: it's not about similar sensory conditions

Issues: #202

inverse-pruning is like cloud, except you can't sample any points in the same thetaGrp (i also tried excluding pts whose theta is within 45 deg of the current theta). it turns out this hypothesis does as well as or better than normal pruning. in fact, inverse-pruning's mean errors by kinematics resemble cloud's more closely than they do pruning's. this suggests that what makes cloud good has nothing to do with sensory conditions (thetas). moreover, it suggests that pruning is not an improvement over habitual because it "takes the best of habitual and cloud".

[figure: avg]
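a minimal sketch of the inverse-pruning rule, assuming the repo's actual implementation differs (it's in MATLAB); the function and variable names here are illustrative, not from the codebase:

```python
import numpy as np

def angdiff(a, b):
    """Smallest absolute angular difference in degrees."""
    return np.abs((a - b + 180) % 360 - 180)

def inverse_pruning_sample(row_pt, theta, cand_rows, cand_nulls, cand_thetas, excl_deg=45):
    """Predict null-space activity by taking the nearest candidate in row space,
    but EXCLUDING candidates whose theta is within excl_deg of the current theta
    (the opposite of pruning's same-condition restriction)."""
    ok = angdiff(cand_thetas, theta) >= excl_deg
    dists = np.linalg.norm(cand_rows[ok] - row_pt, axis=1)
    return cand_nulls[ok][np.argmin(dists)]
```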

Why habitual fails: null and row space means move together

Issues: #199, #200

for each group, take the norm of the difference between the Blk1 and Blk2 means. so for theta=0, take the norm of (mean row space activity in Blk2) - (mean row space activity in Blk1). that's in blue. habitual mean errors are in red.

this corresponds roughly to the mean error for the habitual hypothesis.

note that this is changes in mean ROW SPACE activity predicting changes in mean NULL SPACE activity
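the per-group quantity above can be sketched as follows — a toy illustration with assumed array shapes (rows = timepoints, columns = row-space dimensions); names are mine, not the repo's:

```python
import numpy as np

def mean_shift_by_group(X1, X2, grp1, grp2):
    """For each theta group g, compute the norm of
    (mean row-space activity in Blk2) - (mean row-space activity in Blk1).
    X1, X2: (n_pts, n_dims) activity in Blk1/Blk2; grp1, grp2: group labels."""
    groups = np.unique(np.concatenate([grp1, grp2]))
    return {g: np.linalg.norm(X2[grp2 == g].mean(axis=0) - X1[grp1 == g].mean(axis=0))
            for g in groups}
```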

[screenshot: 2016-06-28, 10:03 am]

[screenshot: 2016-06-21, 5:25 pm]

Properties of cloud samples: not too close

Issues: #201

cloud takes the closest pts in row space, but sometimes the closest pt is further away on some timepoints than on others.

it turns out that the further away that pt is, the better it does at predicting the mean? below: performance of cloud (orange), compared to a hyp picking the closest pt in null space (blue), compared to cloud using only pts whose distance falls in various percentiles of the total distances.

so for each cloud prediction, there's some distance that its sampled pt had in row space from the current pt we're trying to predict. take this distribution of distances and keep only the 1st–33rd percentile, 33rd–66th percentile, etc.
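the percentile split can be sketched like this — a toy version assuming the bands are computed over the pooled distance distribution (band edges are illustrative):

```python
import numpy as np

def percentile_bands(dists, edges=(0, 33, 66, 100)):
    """Split cloud predictions by the row-space distance of their sampled pt.
    Returns one boolean mask per percentile band of dists, e.g.
    0-33rd, 33rd-66th, 66th-100th."""
    cuts = np.percentile(dists, edges)
    return [(dists >= cuts[i]) & (dists <= cuts[i + 1]) for i in range(len(cuts) - 1)]
```

each mask can then be used to recompute cloud's mean error using only the predictions in that band.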

[figure: v7]

The best sampling hypothesis

Issues: #197, #203

so for each pt in the perturbation block, i sample the nearest pt in the intuitive block in terms of a) null space [red] and b) row space [blue; i.e., the "cloud" hypothesis].

the mean errors for these two methods are below. note that red is thus the best possible sampling hypothesis among hypotheses that don't alter null space activity (i.e., unconstrained, habitual, pruning, cloud, and mean shift, but not volitional).
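the two sampling rules can be sketched as below — matching in null space gives the best achievable prediction without altering null activity, matching in row space is cloud. names and data layout are assumptions for illustration, not the repo's API:

```python
import numpy as np

def nearest_idx(query, candidates):
    """Index of the candidate closest to query (Euclidean distance)."""
    return int(np.argmin(np.linalg.norm(candidates - query, axis=1)))

def sample_null_activity(pert_pt, int_nulls, int_rows, space="null"):
    """Predict null-space activity for a perturbation pt by sampling the
    intuitive pt nearest in the chosen space.
    space="null": best-sample (red); space="row": cloud (blue)."""
    if space == "null":
        idx = nearest_idx(pert_pt["null"], int_nulls)
    else:
        idx = nearest_idx(pert_pt["row"], int_rows)
    return int_nulls[idx]
```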

[figure: v4]

can add in a few others as well:

  * best-sample: pick the closest intuitive pt in terms of null space
  * best-habitual: same as best-sample, but only consider pts within 30 deg of thetas
  * best-habitual-inv: same as best-sample, but only consider pts NOT within 45 deg of thetas; kinda like a best-case cloud

note: the "s" at the end of "cloud-1s" and "pruning-1s" just means i used a quicker, simpler implementation of the normal hyps.

[figure: all]

Fitting hypotheses in reverse (from perturbation to intuitive)

Issues: #208

They're basically the same!

Using perturbation to predict intuitive:

[figure: avg]

Normal direction:

[figure: means]

Question: What good is pruning?

Issues: #195

note how pruning basically blends cloud and habitual in terms of errors, usually looking mostly like cloud. (this is clearest for 20120601.) so pruning might not actually be adding much. in fact, it never beats cloud, so do we even need to consider it anymore?

[screenshot: 2016-06-21, 3:43 pm]