Summary: catch up on 20160122 (Matt)
Factor analysis
So I've got this L matrix from factor analysis, and I want to find the factor loadings that explain the most variance. I'd read online that you can sum up the squares of each factor's loadings (the columns of L), but I think that only works when the loadings are orthogonal, which, for my L, they are not.
My first question: How can I orthogonalize L? Matt's answer is to check Byron's GPFA paper (pdf); turns out there's a unique way to orthogonalize them, and using Byron's method they will then also be ordered by the variance they explain! So I just need to check out the methods in that paper.
Catch-up on last week's progress (since he wasn't at the weekly meeting)
Next I walked him through what I did last week, though he had many helpful interjections that made a few things clear to me: 1) I need better ways of explaining how the hypotheses work that aren't just descriptions of how I implement them programmatically; 2) Matt, luckily, understands the programmatic approach, so today at least this was okay; and 3) he has some great approaches to problem-solving and to understanding the data and hypotheses.
Anyway, after seeing how well the intuitive block's activity, projected into the null space of the perturbed mapping, aligns with the perturbed block's activity in that same null space, he got very excited and immediately wanted to know how we could make a hypothesis that basically just predicted that.
So we went to the whiteboard, and after much discussion and notation choices, Matt realized that this should be exactly what the habitual hypothesis does.
Previously, our takeaway from the weekly meeting was that the volitional model was closest, and so my plan this week was to move forward on that one. I knew empirically from my code that, for whatever reason, the habitual hypothesis's predictions appeared to have no significant dependence on kinematics. But Matt convinced me to look again, and I went into the code only to find a pretty significant bug, and so, ALAS! The habitual hypothesis predicts exactly the intuitive block's activity projected into the null space of the perturbed mapping (as sketched below).
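For concreteness, here's a minimal sketch of that projection as I currently think of it (the function and variable names like `Y_intuitive` and `M_perturbed` are placeholders of my own, not the names in the actual code):

```python
import numpy as np
from scipy.linalg import null_space

def habitual_prediction(Y_intuitive, M_perturbed):
    """Habitual-hypothesis sketch: the predicted null activity under the
    perturbed mapping is just the intuitive block's activity projected
    into the null space of that perturbed mapping.

    Y_intuitive : (n_timepoints x n_factors) latent activity, intuitive block
    M_perturbed : (n_outputs x n_factors) perturbed BCI mapping
    """
    N = null_space(M_perturbed)   # (n_factors x k) orthonormal basis for the null space
    return Y_intuitive @ N        # null-space coordinates of the intuitive activity

# The comparison is then between this prediction and the perturbed block's
# activity projected into that same null space.
```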
Conclusion
We're in some sense back to where Pete left off: the habitual hypothesis has the lowest error (actually lower than I'd thought it was before, now that I've fixed my tiny one-line bug). But unlike what Pete found, in my visualization the habitual hypothesis even shows a qualitative match to the actual observed null activity. With a few exceptions, of course, but overall it looks great, whereas with his stick plots it did not look so great.
Compare the agreement between habitual and observed in Pete's view vs. my view, for example:
So visualization is key here to believing that the habitual hypothesis is not so bad.
Moving on, I'd still like to try out the volitional hypothesis some more, but before I do that I'm going to look into whether, by filtering the data in a smarter way, we might get the habitual hypothesis to do even better.