Talk: feedback from Scabby on 2016 08 26
To do:
- How would you show that certain results are statistically significant? (e.g., the unity-line plots comparing cloud and hab; see the first sketch after this list)
- Uncontrolled should be uniform sampling in spike space, not in factor space (see the second sketch after this list)
- In the 6x6 histogram view, how would this look if you just showed the intuitive-mapping data in this space? Or the pre-learning perturbation data?
- Why does the minimal-firing hypothesis show tuning?
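For the significance question on the unity-line plots, one option is a paired non-parametric test on the per-session errors of the two hypotheses. This is only a minimal sketch: the array names, the placeholder values, and the choice of a Wilcoxon signed-rank test are assumptions, not something from the repo.

```python
# Minimal sketch (hypothetical): test whether "cloud" and "hab" errors differ
# in the unity-line comparison. cloud_err and hab_err are assumed to be paired
# per-session error scores; the values below are placeholders.
import numpy as np
from scipy.stats import wilcoxon

cloud_err = np.array([0.21, 0.18, 0.25, 0.30, 0.22])  # placeholder values
hab_err   = np.array([0.26, 0.20, 0.31, 0.33, 0.27])  # placeholder values

# Paired, non-parametric test on the per-session differences; a session whose
# point falls below the unity line has cloud_err < hab_err.
stat, p = wilcoxon(cloud_err, hab_err)
print(f"Wilcoxon signed-rank: statistic={stat:.2f}, p={p:.3f}")
```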
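For the uncontrolled-hypothesis point, here is a sketch of the distinction between sampling uniformly in spike space (and projecting through the factor loadings) versus sampling uniformly in factor space directly. All names and dimensions (`L`, `spike_max`, `n_neurons`, `n_factors`) are illustrative assumptions, not values from the repo.

```python
# Hypothetical sketch: uniform sampling in spike space vs. factor space.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_factors, n_samples = 90, 10, 5000
L = rng.standard_normal((n_factors, n_neurons))  # stand-in factor loadings
spike_max = 50.0                                 # stand-in max rate per neuron

# Suggested version: uniform in spike space, then projected into factor space.
spikes = rng.uniform(0.0, spike_max, size=(n_samples, n_neurons))
factors_from_spikes = spikes @ L.T

# Version to avoid: uniform in factor space directly.
factors_direct = rng.uniform(-1.0, 1.0, size=(n_samples, n_factors))

# The two yield very different factor-space distributions (compare spreads).
print("spread (spike-sampled): ", np.round(factors_from_spikes.std(axis=0)[:3], 2))
print("spread (factor-sampled):", np.round(factors_direct.std(axis=0)[:3], 2))
```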
Slides to do:
- Slow down on the point that we're finding the cursor velocity angle on each timestep
- Introduce the output-null histogram with "activity level" or "firing rate" (or similar) as the x-axis label; make sure people understand this slide before showing the full 6x6 grid of histograms
- Ticks, labels, and units on all axes
- Minimal intervention: maybe show the new blob
- More detail on covariance error for Rob in PNC talk
Suggestions:
- Only present the learning hypotheses for the PNC talk
- Or combine all 5 hypotheses together, glossing over the simpler ones
- Animate transitions in the learning hypotheses to show how the cloud changes
Good questions:
- Uncontrolled: Why uniform? Why not Gaussian?
- Why use IME if results don't change?
- How does error in hypotheses vary as trials go on?
- How similar are the two mappings (e.g., in principal angles), both with IME and without? (See the sketch after this list.)
- What's the difference here between support and distribution when you're talking about predicting output-null activity?
- Pick one target and view its results: do they look similar? Pick the hardest target and the easiest target: how do they compare?
- How does the fall-off of IME error over trials compare to the performance measures? (i.e., do they look similar?)
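For the mapping-similarity question, here is a sketch of comparing the two mappings by the principal angles between their row spaces. The matrices `M1` and `M2` and their shapes are placeholder assumptions for illustration, not the repo's actual mappings.

```python
# Hypothetical sketch: principal angles between two cursor mappings
# (e.g., 2 x n_factors readout matrices). Matrices are random placeholders.
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(0)
n_factors = 10
M1 = rng.standard_normal((2, n_factors))  # stand-in intuitive mapping
M2 = rng.standard_normal((2, n_factors))  # stand-in perturbed mapping

# subspace_angles expects subspaces as columns, so pass the transposed rows.
angles_deg = np.degrees(subspace_angles(M1.T, M2.T))
print("principal angles (deg):", np.round(angles_deg, 1))
```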
Future directions:
- Trial dynamics: Would output-null activity stay constant within a trial?