Summary: weekly mtg 20160210 (Byron, Steve, Matt, me)
Filtering out large angular errors
After catching Matt up on the top-two-factor-based volitional hypothesis doing much better than the original volitional, I explain that I've realized the filtering out of timepoints with high angular errors actually hurts our hypotheses, specifically the volitional one. This provokes a pretty long discussion about whether or not it makes sense to remove large angular errors, given the assumptions.
From my perspective, it doesn't matter if there are errors: you're assuming that the monkey has some goal, namely the cursor-to-target direction, and if he deviates from it, it doesn't matter so much. But Steve's point is that if there were no learning at all going on, say because the monkey couldn't see the cursor, then you'd expect the habitual to be the right hypothesis.
So it seems like we can't escape some sort of quantification of learning based on kinematics. Steve mentions that if, say, the habitual did better even when learning definitely took place, that would be a stronger argument. Even so, who's to say that the null activity we observe post-learning is actually the null activity the monkey wants to generate? Maybe he just corrects his row space first, and then once that's done, he moves (at a slower time-scale) towards eliminating unnecessary null space activity carried over from the intuitive mapping.
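For reference, here's a minimal sketch of the angular-error filter we've been debating, assuming the error is the angle between the cursor's velocity and the straight-line direction to the target (the exact definition and threshold in the code may differ):

```python
import numpy as np

# Assumed definition: angular error is the angle between the cursor's velocity
# and the cursor-to-target direction, in degrees.
def angular_error_deg(cursor_pos, cursor_vel, target_pos):
    to_target = target_pos - cursor_pos
    cos_ang = np.sum(cursor_vel * to_target, axis=1) / (
        np.linalg.norm(cursor_vel, axis=1) * np.linalg.norm(to_target, axis=1))
    return np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))

def low_error_mask(cursor_pos, cursor_vel, target_pos, max_deg=45.0):
    # Boolean mask of timepoints whose angular error is below the threshold
    return angular_error_deg(cursor_pos, cursor_vel, target_pos) < max_deg
```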
So Steve and Byron suggest that we start looking at activity over time, including the trials immediately after the perturbation happens. I'd done this to some extent by just looking at the norms, but Steve suggests plotting the first two PCs. Byron suggests using DataHigh.
In this new view of the dynamics of null-space activity over time, the different hypotheses predict different things (see the sketch after this list):
- habitual says nothing changes
- volitional says...?
- baseline says null activity moves towards zero
- minimum says null activity moves towards some non-zero value
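Here's a rough sketch of the kind of plot Steve is suggesting, using plain PCA in numpy rather than DataHigh; `null_activity` and `trial_index` are hypothetical variable names:

```python
import numpy as np
import matplotlib.pyplot as plt

# Sketch: project null-space activity onto its first two PCs and color by trial.
# null_activity: (timepoints x units), already projected into the null space of
# the perturbed mapping; trial_index: trial number of each timepoint.
def plot_null_activity_over_time(null_activity, trial_index):
    mean = null_activity.mean(axis=0)
    _, _, Vt = np.linalg.svd(null_activity - mean, full_matrices=False)
    # Project the uncentered activity so distance from the origin stays meaningful
    scores = null_activity @ Vt[:2].T
    plt.scatter(scores[:, 0], scores[:, 1], c=trial_index, cmap='viridis', s=5)
    plt.colorbar(label='trial')
    plt.xlabel('PC 1')
    plt.ylabel('PC 2')
    plt.title('Null-space activity over trials')
    plt.show()
```

Coloring by trial should make it visible whether the cloud stays put, drifts toward the origin, or settles somewhere else.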
Mixture of volitional and habitual
I tell them how the volitional now does better than the habitual (without angular error filtering, and using the top two factors as the correction mapping), but only in terms of mean error. The variance is huge and awful! But if I just scale down the volitional correction (i.e., keep the direction of the correction but take a shorter solution), then it does the best: low mean error, reasonable variance error (closer to the habitual).
The scaling (e.g., dividing by 5) brings the volitional's solutions within the range of reasonable activity, but now they won't produce the observed cursor movement on their own. So the story is that the monkey corrects part of the way in his volitional space (volitional), and then moves the rest of the way in the true mapping's row space (habitual).
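To make the mixture concrete, here's a minimal sketch, assuming a linear mapping (velocity = M @ activity); all variable names are illustrative:

```python
import numpy as np

# z_vol: the volitional hypothesis's solution, M: the true (perturbed) mapping,
# v_obs: the observed cursor velocity, alpha: how far the volitional correction
# goes (e.g. 1/5).
def mixed_prediction(z_vol, M, v_obs, alpha=0.2):
    z_scaled = alpha * z_vol                 # partial correction in volitional space
    residual = v_obs - M @ z_scaled          # velocity still unaccounted for
    z_row = np.linalg.pinv(M) @ residual     # min-norm correction in M's row space
    return z_scaled + z_row                  # still reproduces v_obs through M
```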
Steve is opposed because it doesn't have an easy interpretation. As Byron puts it: if the monkey can move in the correct row space, why doesn't he move all the way in that space?
My claim/belief is that there's probably a way of viewing this mixture of mappings as some other mapping. Perhaps.
Byron suggests that we add a maximum firing constraint to a 3-FA volitional, instead of taking the minimum norm solution. (But won't this yield solutions right on that boundary? We'll see.)
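One way this could be formalized (my guess at the formulation, not something we pinned down in the meeting): solve for activity in the 3-factor volitional space that produces the desired velocity while keeping each unit's rate within bounds, rather than taking the minimum-norm solution. A sketch, with illustrative names:

```python
import numpy as np
from scipy.optimize import minimize

# L: (n_units x 3) FA loadings, M: (2 x n_units) mapping, v_des: desired velocity,
# u_max: per-unit maximum firing rates (or a scalar bound).
def volitional_with_max_firing(L, M, v_des, u_max):
    def kinematic_error(x):
        # squared error between produced and desired cursor velocity
        return np.sum((M @ (L @ x) - v_des) ** 2)

    cons = [{'type': 'ineq', 'fun': lambda x: u_max - L @ x},   # u <= u_max
            {'type': 'ineq', 'fun': lambda x: L @ x}]           # u >= 0
    x0 = np.zeros(L.shape[1])
    res = minimize(kinematic_error, x0, method='SLSQP', constraints=cons)
    return L @ res.x    # predicted activity; worth checking if it sits on the bound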
In my view, the only way to fix the variance of the volitional's solutions is to give it axes that are more aligned with the true mapping. The factor-based row space is evidently at a large enough angle from the true row space that, in order to match the observed kinematics, you have to move into unreasonable regions along those axes.
Covariance errors
I mention the new way of assessing errors in covariance: describing how much variance of one dataset is explained by the eigenvectors of the other's covariance. Byron immediately agrees, pointing out that the covariance ratio (ratio of determinants) would call two covariances with different orientations identical, since determinants ignore orientation. So now we have a way of assessing how similar the covariances we're predicting are to the observed ones in both shape and orientation.
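A sketch of the metric as I'm reading it (normalization details may differ): take the top-k eigenvectors of one covariance and ask what fraction of the other covariance's total variance lies along them.

```python
import numpy as np

# Fraction of the total variance in C_other that lies along the top-k
# eigenvectors of C_ref. Unlike a ratio of determinants, this is sensitive to
# orientation, not just size/shape (use k smaller than the full dimension).
def variance_explained_by(C_ref, C_other, k):
    evals, evecs = np.linalg.eigh(C_ref)        # ascending eigenvalues
    top = np.argsort(evals)[::-1][:k]           # indices of the top-k eigenvectors
    V = evecs[:, top]
    return np.trace(V.T @ C_other @ V) / np.trace(C_other)
```

Running it both ways (predicted axes against the observed covariance, and vice versa) gives a handle on both shape and orientation similarity.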