Prep: weekly mtg 20160210 (Byron, Steve, Matt, me)
Small stuff
- for an arbitrary unitary matrix V (V^TV = I), rotate the latents and counter-rotate the mapping: L' = LV^T, z' = Vz. Do the errors/norms/results change? No.
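A minimal sketch of that check, using a column-vector convention so V rotates z and V^T counter-rotates L; all names and dimensions here are made up, not the repo's:

```matlab
rng(1);
n = 10; k = 2;              % latent and output dimensions (made up)
z = randn(n, 1);            % one sample of latent activity (column)
L = randn(k, n);            % mapping from latent activity to output
[V, ~] = qr(randn(n));      % random unitary V, so V'*V = I
L2 = L * V';                % rotated mapping, L' = L*V^T
z2 = V * z;                 % rotated activity, z' = V*z
max(abs(L*z - L2*z2))       % ~0: the output L*z is unchanged
```

Since V'*V = I, the rotation cancels in L2*z2, so anything computed from the outputs (errors, norms, fits) is unchanged.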
Bigger stuff
Comparing covariances
- First off, I've gone back and checked Pete's old code just to make sure, and yes, the covRatios I'm getting are the same ones he was getting, and these do not match what was in the write-up: basically, everything but the habitual hypothesis has crazy covRatios.
- The problem does appear to be super tiny eigenvalues (det(cov(C)) = prod(eigs)) making the determinant, and hence the covRatio, blow up.
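A toy illustration of the failure mode, assuming the covRatio involves a ratio of determinants (the numbers here are made up):

```matlab
% det(cov) = prod(eigenvalues), so one near-zero eigenvalue sends a
% determinant toward zero, and any ratio with it in the denominator
% explodes.
eigs1 = [2.0, 1.0, 0.5];      % well-behaved covariance spectrum
eigs2 = [2.0, 1.0, 1e-12];    % same, but one super tiny eigenvalue
prod(eigs1) / prod(eigs2)     % ~1e12: the ratio blows up
```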
I've found a new way of comparing covariance matrices:
a. implementation: sum((v11 - v21).^2 + (v12 - v22).^2), where v21 = var(D2*U1), etc. (If the covariance matrices are the same, then a given eigenvector should explain the same amount of variance in both samples; see the sketch after this list.)
b. intuitive: percent of variance explained in one data set using the other's eigenvectors.
c. consistent with the mean error metric: 0 means a perfect match; also, the "zero" model serves as the baseline (doing worse than "zero" means you might as well predict no covariance).
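Here's a sketch of (a) as I read it, assuming D1 and D2 are trials-by-neurons samples and U1, U2 are the eigenvectors of each sample's covariance (the names and shapes are my guesses, not necessarily the actual code):

```matlab
function score = covCompare(D1, D2)
% D1, D2: trials-by-neurons data from the two conditions (assumed)
    [U1, ~] = eig(cov(D1));    % eigenvectors of each sample's covariance
    [U2, ~] = eig(cov(D2));
    v11 = var(D1 * U1);        % variance of D1 along D1's eigenvectors
    v21 = var(D2 * U1);        % variance of D2 along D1's eigenvectors
    v12 = var(D1 * U2);        % variance of D1 along D2's eigenvectors
    v22 = var(D2 * U2);        % variance of D2 along D2's eigenvectors
    % if the covariances match, each eigenvector explains the same
    % amount of variance in both samples, so every term below is ~0
    score = sum((v11 - v21).^2 + (v12 - v22).^2);
end
```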
So above, we see that the habitual hypothesis predicts the orientation of the covariance very well; the shape, not so much, but still better than the zero model.
Results of filtering data
The perturbation trials have a much broader range of angular error, so filtering both blocks at 20 deg angular error might affect results--especially for hypotheses that predict null activity similar to that in the initial block.
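For concreteness, a hypothetical version of that angular-error filter (the variable names and numbers are made up):

```matlab
angErrDeg = [3 15 42 19 55 8];    % per-trial angular errors (deg), made up
keep = abs(angErrDeg) <= 20;      % the 20-deg cutoff described above
trialsKept = find(keep)           % trials surviving the filter
```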
And it turns out that removing the angular-error filtering improves the volitional hypothesis drastically, making it suddenly very competitive with the habitual hypothesis:
Here's an image of how volitional improves:
You might think this is just because we're now including early trials, where the monkey might still be relying on the intuitive block's mapping. But in fact, these angular errors happen consistently throughout the perturbation block.
The other two filters we'd been using are on the radius of the cursor relative to start position and target position. Here are the results of toggling these on and off, and fitting, across all four sessions:
Note that volitional is always best with anyAngErr + noMin, and in 3 out of 4 cases this makes it better than the habitual hypothesis (in terms of error -- probably not in terms of covariance).
Here are the covariance scores, with no angErr filtering (note that volitional is bad here):
The covariance scores don't appear to change with filtering.
Behavioral performance vs. hypothesis performance, for each kinematic condition across sessions
Nothing I've looked at so far shows any correlation with how well a hypothesis fits.
And then sometimes it looks like there's a match...
But it doesn't hold for other days.
Scaling down the volitional hypothesis makes it way better
(I'm talking about the volitional using the first 2 FA dimensions, by the way.)
If I take the volitional component and divide it by, say, 5, then the volitional's variance gets way better, closer to the habitual's.
In terms of mean error:
- If the volitional was already better than the habitual, this scale-down will make it slightly worse, but still better than the habitual.
- If the volitional was worse than the habitual, the scale-down will make it better than both.
In terms of variance, the scale-down always improves drastically on the unscaled volitional, and sometimes even does better than the habitual.
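A toy check of the variance effect (made-up names): dividing the volitional component by c shrinks its contribution to the predicted variance by c^2, which is why the variance score improves so drastically.

```matlab
rng(1);
zVol = randn(1000, 3);    % stand-in for the volitional component
c = 5;                    % the divide-by-5 from above
var(zVol)                 % per-dimension variance, ~1
var(zVol / c)             % ~1/25 of the original: var(X/c) = var(X)/c^2
```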