Summary: weekly mtg 20170503 (Byron, Steve, Aaron, me) - mobeets/nullSpaceControl GitHub Wiki
To do:
- cite Matt's Cosyne abstract
  - mention that this (my version) is a stronger finding than that one in this particular way...
- send to Aaron post-discussion
- are we close to the word limit?
- scaling factor: normalize activity by the number of neurons
  - take the length at which the FA loading vector would intersect the unit hypercube as the normalizer, e.g., [0 1] has length 1 but the diagonal has length sqrt(2)
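The hypercube normalizer above can be sketched in a few lines; a minimal illustration (the function name and NumPy usage are mine, not from the notes):

```python
import numpy as np

def hypercube_exit_length(v):
    """Length at which the ray from the origin along v exits the unit
    hypercube, i.e., the proposed normalizer for an FA loading vector.
    Uses |v| so the sign of each loading doesn't matter."""
    v_hat = np.asarray(v, dtype=float)
    v_hat = v_hat / np.linalg.norm(v_hat)   # unit vector along the loading direction
    return 1.0 / np.max(np.abs(v_hat))      # t such that t * v_hat first touches a face

print(hypercube_exit_length([0, 1]))  # 1.0
print(hypercube_exit_length([1, 1]))  # sqrt(2) ~ 1.414
```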
Discussion:
- Summary:
  - no need to revisit motivation; just an executive summary
  - just revisit what we found, and why it's important
- Main finding: neural redundancy is solved differently than muscle redundancy.
- why do we study redundancy in the first place? what's the contribution?
  - A: these findings gave rise to cost functions, which in turn gave rise to better models in control theory
    - these cost functions have led to a deeper understanding of how muscles are recruited when making movements (i.e., Byron says don't focus on just making better models; say it leads us to a better understanding)
    - maybe an understanding of the neural redundancy problem will lead to similar models of how neural activity is modulated to yield better control
- why the current draft reads like a book report
  - Byron says it's when the flow seems organized around the references
  - instead, you want each paragraph to make a point
    - a more nuanced point
    - or a specific comparison to other work
- overall, keep the same meat as now, just reorganize it around the big ideas, one main point per paragraph, and trim out any tiny points
High-level major points
- Summary
  - no need to revisit motivation; just an executive summary
  - just revisit what we found, and why it's important
- Main finding: neural redundancy is solved differently than muscle redundancy.
- Comparing solutions to the neural and muscle redundancy problems
  - go deep
    - e.g., why would it be different for neurons than for muscles? like minimum energy.
    - see: the point where Steve says "this is an appropriate level of detail"
- Estimating output-null dimensions?
  - Byron says maybe put this in Methods; keeping it in the Discussion might over-emphasize it as a potential problem
  - but it is important to point out how critical it is to know what the null space dimensions are
- How generalizable are these results?
  - e.g., longer timescales
    - effect of practice (say, with the same WMP, day after day)
    - this is one place we can talk about learning, but we should still stress that learning acts within the null space
  - e.g., other brain areas
  - mention we don't consider dynamics. would considering dynamics help?
    - okay to leave this out if it doesn't fit somewhere
    - is this just a local-area effect vs. an inter-area one?
- Are there advantages of neural redundancy? (CONCLUSION)
  - entropic constraint?
    - maybe activity on the manifold is just more probable (i.e., more neural states represent that activity), and activity tends to "relax" to the more probable state
  - discuss other ideas of why it might be useful...
    - e.g., it seems weird that you would use all of the redundancy; let's talk about that
    - maybe you have to use the whole repertoire in order to preserve the network state, so you have to use the whole null space to visit all the states
    - maybe cite Litwin-Kumar & Doiron (Nature Communications? look for "maintenance")
WASHOUT STUFF
- with the WMP strategy, what would happen through the intuitive mapping?
  - Steve wants some simulations of this
  - say, conditioned on the same cursor-target situation
- let's assume he's got two static internal models: the intuitive mapping and the perturbation mapping
  - you've got the intuitive neural activity conditioned on this
  - and you've got the perturbation neural activity conditioned on this
  - during washout, which of those is he selecting from? can we tell these apart? does he have an a-ha moment and just switch immediately?
- in 20160722, the left target has 3 in one path, 2 in another
  - were they adjacent? which came first?
suggested analysis:
- for each timepoint in the first few washout trials, reach back in time to the most recent visual feedback, and propagate two internal models forward:
  - the IME from the intuitive session
  - the IME from the last bit of the perturbation session
- now compare the angular errors under these two models
- see photo from whiteboard
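A minimal sketch of the suggested analysis, assuming each IME can be treated as a callable mapping spikes to a predicted cursor velocity (the function names and the stand-in linear decoders are hypothetical, not from the notes or the actual IME fits):

```python
import numpy as np

def angular_error(pred_vel, target_dir):
    """Angle (degrees) between a predicted cursor velocity and the target direction."""
    cos = pred_vel @ target_dir / (np.linalg.norm(pred_vel) * np.linalg.norm(target_dir))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def compare_imes(spikes, target_dirs, ime_intuitive, ime_perturbation):
    """For each washout timepoint, propagate both internal models forward
    and return the angular error under each model (shape: T x 2)."""
    return np.array([
        [angular_error(ime_intuitive(z), d), angular_error(ime_perturbation(z), d)]
        for z, d in zip(spikes, target_dirs)
    ])

# toy usage: two stand-in linear decoders on random "spikes"
rng = np.random.default_rng(0)
spikes = rng.normal(size=(5, 10))         # 5 timepoints, 10 neurons
targets = np.tile([1.0, 0.0], (5, 1))     # direction to target at each timepoint
M_int = rng.normal(size=(2, 10))          # stand-in for the intuitive-session IME
M_prt = rng.normal(size=(2, 10))          # stand-in for the late-perturbation IME
errs = compare_imes(spikes, targets, lambda z: M_int @ z, lambda z: M_prt @ z)
print(errs.shape)  # (5, 2)
```

Whichever column has consistently lower error on the early washout trials would indicate which internal model he's selecting from; a sudden switch between columns would look like the a-ha moment.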