Summary: weekly meeting 20160203 (Byron, Steve, me) - mobeets/nullSpaceControl GitHub Wiki

Minimum cv-score is basically 0.

Steve says this will be the SE. Think about this.

Covariance score: disagreement between Pete's write-up and his code (and my code, which agrees with his code).

Byron and Steve seem concerned. I haven't looked into it yet, but from what I've seen of the distributions, we shouldn't be getting det(covariance) ratios of 10^10. At least, the variances (true vs. predicted) don't seem that far off from one another. Byron reminds me that the determinant is the product of the eigenvalues, so maybe one eigenvalue of the observed covariance is blowing up? Anyway, the ordering changes from session to session: sometimes baseline, e.g., has a covRatio of 10^10, sometimes 10^-10.
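A minimal sketch of Byron's point (the covariance matrices and eigenvalues here are made up, not from the data): since the determinant is the product of the eigenvalues, two covariance matrices with comparable variances can still have a huge det ratio if one eigenvalue is near zero.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_cov(eigvals):
    """Build a covariance matrix with the given eigenvalues."""
    n = len(eigvals)
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthogonal basis
    return Q @ np.diag(eigvals) @ Q.T

C_true = random_cov([1.0, 1.0, 1.0])    # det = 1
C_pred = random_cov([1.0, 1.0, 1e-10])  # one tiny eigenvalue: det = 1e-10

# The diagonal variances of C_pred are all order ~1, yet:
ratio = np.linalg.det(C_true) / np.linalg.det(C_pred)
print(ratio)  # ~1e10
```

So a covRatio of 10^10 doesn't require the variances to look wrong; a single near-degenerate direction is enough.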

Orthonormalized factors

Byron repeats that the orthonormalized factors are in the same units as the original data: if L = USV', then u = Lz = (USV')z = U(SV'z). So setting L' = U and z' = SV'z, the latents z' are in the same units as u, since U has orthonormal columns.
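A quick numerical check of this (the dimensions are arbitrary, just for illustration): the SVD-based change of basis leaves the reconstruction u = Lz unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
L = rng.standard_normal((10, 3))  # hypothetical loading matrix
z = rng.standard_normal(3)        # latent state

# L = U S V'; the orthonormalized loadings are L' = U,
# and the new latents are z' = S V' z.
U, s, Vt = np.linalg.svd(L, full_matrices=False)
z_prime = np.diag(s) @ Vt @ z

# The observations are unchanged by the change of basis:
assert np.allclose(L @ z, U @ z_prime)
```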

Anyway, multiplying by S gives different results than before orthonormalizing. By how much do the hypotheses change?

Nonidentifiability of FA

Byron says that any unitary transformation V (V'V = I) is equivalent in FA, i.e., Lz = (LV)(V'z).
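A small sketch of the rotation non-identifiability (arbitrary dimensions, random V for illustration): replacing L with LV and z with V'z leaves both the observations and the marginal covariance unchanged.

```python
import numpy as np

rng = np.random.default_rng(2)
L = rng.standard_normal((10, 3))  # hypothetical loading matrix
z = rng.standard_normal(3)        # latent state
V, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random orthogonal V (V'V = I)

# The observations are identical under the rotated model:
assert np.allclose(L @ z, (L @ V) @ (V.T @ z))

# So is the low-rank part of the marginal covariance, (LV)(LV)' = LL':
assert np.allclose(L @ L.T, (L @ V) @ (L @ V).T)
```

This is why the question below matters: any hypothesis that depends on the particular columns of L (rather than its column space) could change under such a V.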

Does this change any of our results, for an arbitrary unitary V?

New volitional hypotheses

I show them the new volitional hypotheses, and explain that sometimes the new volitional (with row space = first two latent factors) is better than the original volitional (row space = row space of intuitive mapping).

Hypotheses across days/monkeys

I show Byron and Steve that the habitual is always either the best or tied for the best.

Why does the habitual fail?

I show them my analyses of the norms of various activity for each kinematics condition, and how they seem to shift around from intuitive to shuffled somewhat arbitrarily.

Steve proposes that we try to compare habitual performance to behavioral metrics: performance hit (i.e., performance drop right after switching to perturbed mapping), and performance learned (i.e., performance improvement from beginning to end of perturbed), for each kinematics condition.
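A hypothetical sketch of the two behavioral metrics Steve proposes, given per-trial performance traces for a session. The function names, the choice of trial windows (n=20), and the performance measure itself are all assumptions, not from the meeting.

```python
import numpy as np

def performance_hit(baseline_perf, perturbed_perf, n=20):
    """Performance drop right after switching to the perturbed mapping:
    mean of the last n baseline trials minus mean of the first n perturbed trials.
    (n=20 is an assumed window, not from the write-up.)"""
    return np.mean(baseline_perf[-n:]) - np.mean(perturbed_perf[:n])

def performance_learned(perturbed_perf, n=20):
    """Performance improvement over the perturbed block:
    mean of the last n perturbed trials minus mean of the first n."""
    return np.mean(perturbed_perf[-n:]) - np.mean(perturbed_perf[:n])
```

These could then be correlated with habitual-hypothesis error per kinematics condition, across data sets, as Byron suggests below.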

Byron gets excited about this idea and says we can do this across data sets, per kinematics condition.