Summary: weekly meeting 20160706 (Byron, Matt, me)
get figures down; make a detailed outline of the various sections
methods before or after results:
- Nature Neuroscience puts methods after
- Neuron -- ?
any time you have a "we found", that's results, not discussion
- and add it to the supplemental figures
target audience: anyone at Cosyne, or experimentally oriented neuroscientists
- want high-level idea--easy to skim!
- should easily be able to find a big-picture question answered in the results
main result first: e.g. figure 3
- also that one figure with all results (x-axis is session, y-axis is mean error)
- but connecting the dots with a line doesn't make sense...find the right way to show it (see the sketch after this list), then go into the high-level explanation (rather than zooming in)
- e.g., in the unconstrained cartoon, put the blue line lower, and then you'd predict null space activity outside the "cloud"
- but maybe find a way to show real data
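One possible way to show the all-sessions figure without connecting lines (a minimal matplotlib sketch; the hypothesis names and numbers here are placeholders, not real results): treat session as a categorical axis and plot each session's mean error as a point with error bars.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
sessions = np.arange(1, 9)  # session index, treated as categorical

# Placeholder per-session mean errors for two hypotheses (made-up data).
errs = {
    'habitual': 2.0 + 0.3 * rng.standard_normal(sessions.size),
    'cloud': 1.5 + 0.3 * rng.standard_normal(sessions.size),
}
sems = {name: 0.2 * np.ones(sessions.size) for name in errs}

fig, ax = plt.subplots()
for i, (name, err) in enumerate(errs.items()):
    # Points with error bars, slightly offset per hypothesis; no connecting
    # line, since sessions are independent rather than a continuous variable.
    ax.errorbar(sessions + 0.1 * i, err, yerr=sems[name],
                fmt='o', capsize=3, label=name)
ax.set_xlabel('session')
ax.set_ylabel('mean error')
ax.legend()
plt.show()
```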
supplemental only if you can't work it into the main story
need to provide criteria for using certain data sets, if I don't use all of them
don't include pruning or mean shift in the first round
- maybe make them figure 8
- otherwise they cloud the story, since they aren't better
for dates:
- prefix dates with "J" or "L" in the names so it's clear which monkey each data set is from
IME and task details are in methods
use of "null space": this is okay
- maybe in the introduction, introduce y = f(Bx), where B is short and fat, and explain how there are multiple x's that produce the same y [or it could just be y = f(x), and in our case f(x) = Bx] -- see the sketch below
- we don't know f or B for the muscles, but with a BCI we can define this relationship
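A minimal numpy sketch of this point (the dimensions and the random B are made up for illustration): for a short-and-fat B, adding any null-space vector to x leaves the output y unchanged.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)

# A "short and fat" readout B: maps 10-d neural activity x to a 2-d output y.
B = rng.standard_normal((2, 10))

# Basis for the null space of B: directions in x that leave y unchanged.
N = null_space(B)                                  # shape (10, 8)

x = rng.standard_normal(10)
x_alt = x + N @ rng.standard_normal(N.shape[1])    # add any null-space vector

# Both activity patterns produce exactly the same output y = B @ x.
print(np.allclose(B @ x, B @ x_alt))               # True
```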
factor analysis assumes that all activity outside the 10-d shared space is private noise unrelated to the task
- need high-level motivation for factor analysis: if you look at one neuron, some of its variability is just spiking noise; but FA returns the shared part, which is more likely to be involved in the task?
- n.b. FA does not return orthogonal axes (see the sketch below)
- check out HW toy problem with FA that illustrates this point
- could assess this in spike space easily enough, but the estimates might be much noisier (e.g., estimating a covariance in high dimensions from a small number of points)
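A minimal sklearn sketch of these two points, on simulated data (trial counts, neuron counts, and loadings are all made up): FA partitions each neuron's variance into a shared part plus private noise, and the recovered factor axes are generally not orthogonal.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_trials, n_neurons, n_factors = 500, 40, 10

# Simulate activity: shared variability from 10 latent factors plus
# independent ("private") noise for each neuron.
Z = rng.standard_normal((n_trials, n_factors))           # latent states
W = rng.standard_normal((n_factors, n_neurons))          # loading matrix
private_var = rng.uniform(0.5, 2.0, n_neurons)           # per-neuron noise
X = Z @ W + rng.standard_normal((n_trials, n_neurons)) * np.sqrt(private_var)

fa = FactorAnalysis(n_components=n_factors).fit(X)

# FA splits each neuron's variance into a shared part (from the loadings)
# and a private part (noise_variance_); everything outside the shared
# 10-d space is treated as task-unrelated noise.
shared_var = (fa.components_ ** 2).sum(axis=0)
model_var = shared_var + fa.noise_variance_
print(np.corrcoef(model_var, X.var(axis=0))[0, 1])       # close to 1

# n.b. the recovered factor axes are generally not orthogonal:
G = fa.components_ @ fa.components_.T                    # Gram matrix
off_diag = G - np.diag(np.diag(G))
print(np.abs(off_diag).max())                            # nonzero in general
```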
should I cover hypothesis performance as a function of learning?
- that's more in Matt's domain...
maybe start with hypotheses fit and evaluated on the intuitive mapping
- this will knock out unconstrained and minimal energy
- habitual and cloud are very close, but wait! maybe this is just because there's no learning. so now we introduce the perturbation...
- show that habitual does much worse now
- can maybe use IME-only in this case; also motivation for using IME only in the perturbation case
- in intuitive, he's killing it
- in perturbation, he learns but never recovers--so we need to estimate his internal model