# Summary: weekly meeting 20160217 (Byron, Steve, Matt, me) - mobeets/nullSpaceControl GitHub Wiki
## Null activity changes over time
If you just plot the perturbation-mapping null space activity over time (even starting later in the trial count, when learning should supposedly be complete), it's clear that the activity keeps changing:
Maybe null activity is somehow yoked to row space activity, or to learning in general? It must be, actually: how else could the monkey choose null activity when he doesn't even know where the null space is?
## New visualizations
- compare changes in row space to changes in null space
- compare to behavior over time (cursor progress, percent correct, etc.)
  - y-axis: activity in one column
  - x-axis: time
- also compare row and null space against each other: how much does each move from early to late?
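One way to sketch the first comparison: split activity into its row space and null space components under the mapping and track each over time. A minimal numpy sketch, assuming activity `Z` is time bins × neurons and the BCI mapping `M` is outputs × neurons (all names and the fake data are illustrative, not the repo's actual code):

```python
import numpy as np

def row_null_projections(Z, M):
    """Split activity into row-space and null-space components of mapping M.

    Z: (T, N) neural activity over T time bins; M: (K, N) BCI mapping.
    Returns (Z_row, Z_null), each (T, N), with Z = Z_row + Z_null.
    """
    # orthonormal basis for the row space of M via SVD
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    P_row = Vt.T @ Vt            # projector onto the row space
    Z_row = Z @ P_row
    Z_null = Z - Z_row           # remainder lies in the null space of M
    return Z_row, Z_null

# example: how much does null activity move from early to late?
rng = np.random.default_rng(0)
Z = rng.standard_normal((200, 10))   # fake activity: 200 bins, 10 units
M = rng.standard_normal((2, 10))     # fake 2D mapping
Z_row, Z_null = row_null_projections(Z, M)
early, late = Z_null[:50].mean(0), Z_null[-50:].mean(0)
drift = np.linalg.norm(late - early)
```

Plotting one column of `Z_row` and `Z_null` against time (the y-axis/x-axis bullets above) would then just be a line plot per component.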
## New view of hypotheses
Each hypothesis makes a prediction about where activity should be moving. So find these predicted points, and look for evidence that activity converges towards them.
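A minimal sketch of this convergence check, assuming each hypothesis supplies a single predicted null-space point `z_pred` (hypothetical names; the drifting data here is fabricated just to illustrate the test):

```python
import numpy as np

def distance_to_prediction(Z_null, z_pred):
    """Per-trial distance from observed null activity to a hypothesis's
    predicted point; a decreasing trend is evidence of convergence."""
    return np.linalg.norm(Z_null - z_pred, axis=1)

# fake activity that drifts toward the predicted point over 100 trials
rng = np.random.default_rng(1)
z_pred = rng.standard_normal(10)
T = 100
alphas = np.linspace(0.0, 0.9, T)[:, None]
Z_null = (1 - alphas) * rng.standard_normal(10) + alphas * z_pred \
         + 0.1 * rng.standard_normal((T, 10))
d = distance_to_prediction(Z_null, z_pred)
# crude convergence criterion: late trials closer than early trials
converging = d[-20:].mean() < d[:20].mean()
```

Comparing `d` curves across hypotheses would give one concrete way to say which prediction activity is actually moving towards.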
## Two main problems
- how to handle error: do we care that the monkey's actual movement didn't match his goal?
  - we need to believe his null activity is what he intended to do
  - internal model estimation (use trials where the internal model's row space has converged, as opposed to filtering out large angular error)
  - or simply filter out large errors
- activity changing over time
  - filter it out, or
  - make predictions about what activity is moving towards
## The end goal
My question: What are the other figures of this paper? How do we know when we're done? How do we know a hypothesis is "good enough"?
- presumably, when we believe one hypothesis does the best?
## New hypothesis: "recent" habitual
We need a hypothesis that makes predictions based on the knowledge that null space activity is yoked to the total activity while the monkey is exploring in order to learn the new mapping.
So, say your internal model has converged. Then what was the null space activity right before that? Maybe that just persists.
Q: Is this a fair hypothesis? Byron seems to say no.
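A rough sketch of what this hypothesis could predict, assuming a convergence trial index `t_converged` is given by the internal model estimation (the function name, window size, and data are all assumptions, not the repo's implementation):

```python
import numpy as np

def recent_habitual_prediction(Z_null, t_converged, window=10):
    """'Recent habitual' hypothesis sketch: predict that post-convergence
    null activity is the average null activity from the `window` trials
    just before the internal model converged, persisting unchanged."""
    lo = max(0, t_converged - window)
    return Z_null[lo:t_converged].mean(axis=0)

# example with fake null-space activity: 80 trials, 10 units
rng = np.random.default_rng(2)
Z_null = rng.standard_normal((80, 10))
z_hat = recent_habitual_prediction(Z_null, t_converged=50)
```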
## New hypothesis: Network constraint
- for each time t, take the row space activity, find similar row space activity under the intuitive mapping to draw an activity pattern z, then project z into the current (perturbed) null space
- this should depend on learning, so the internal model will determine where he selects from
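The two steps above might look like the following sketch, assuming `M_int` and `M_pert` are the intuitive and perturbed mappings, `Z_intuitive` is activity recorded under the intuitive mapping, and similarity is nearest-neighbour distance in the intuitive row space (every name and choice here is an assumption):

```python
import numpy as np

def network_constraint_prediction(z_row_t, Z_intuitive, M_int, M_pert):
    """Network-constraint hypothesis sketch: find the intuitive-block
    activity whose row-space component (under the intuitive mapping)
    best matches the current row activity z_row_t, then project that
    activity into the null space of the perturbed mapping."""
    def row_proj(M):
        _, _, Vt = np.linalg.svd(M, full_matrices=False)
        return Vt.T @ Vt
    # nearest neighbour in the intuitive mapping's row space
    P_int = row_proj(M_int)
    dists = np.linalg.norm(Z_intuitive @ P_int - z_row_t, axis=1)
    z = Z_intuitive[np.argmin(dists)]
    # predicted null activity under the perturbed mapping
    P_pert = row_proj(M_pert)
    return z @ (np.eye(M_pert.shape[1]) - P_pert)

# example with fake data: 100 intuitive-block trials, 10 units, 2D mappings
rng = np.random.default_rng(3)
Z_intuitive = rng.standard_normal((100, 10))
M_int = rng.standard_normal((2, 10))
M_pert = rng.standard_normal((2, 10))
z_row_t = rng.standard_normal(10)   # current row activity (fake)
z_null_pred = network_constraint_prediction(z_row_t, Z_intuitive, M_int, M_pert)
```

By construction the prediction lies in the perturbed null space, so it can be compared directly against observed null activity.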
Q: Is it an unfair advantage that this hypothesis gets the true row space? (The other hypotheses don't.)
Q: This hypothesis doesn't use the cursor-to-target goal to make predictions; is it fair to present its predictions with respect to that goal?
To do: Make a cartoon of all hypotheses in this view [see photo]