Summary: weekly mtg 20160831 (Byron, Steve, Matt, me)
To discuss
Questions:
- Presentation order: just learning; all together; or 1-2 as we have it now
- Two generic mappings vs. one new, one more familiar
Quick questions:
- Uncontrolled: is uniform best? (and in spikes or in latents)
- Minimal firing showing tuning (but only after PCA)
- Output-null beyond dim 4 if it doesn't contribute to mean/cov errors
- Using IME if it doesn't change results
- Distribution vs. support
- Getting baseline firing data
- Using incorrect trials (I'm not, currently)
Discussed
For Rob:
- less motivation, but still make it clear
- more math
- keep same structure: basic hyps, results, new hyps, results
- learning hyps by around 21 mins
Uncontrolled: you've identified the bounds, but how does the distribution look within them? this is just a naive approach; some of the later hypotheses will look at a different distribution...
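As a point of reference, here is a rough sketch (not the repo's code; the data and dimensions are stand-ins) of what "uniform within the bounds" implies when compared against the observed null-space distribution:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# null_activity: (timepoints x null dims) observed output-null activity;
# stand-in data here -- in the real analysis it comes from the factor/null-space step
null_activity = rng.standard_normal((500, 4))

# "uncontrolled-uniform": sample uniformly within the per-dimension bounds
lo, hi = null_activity.min(axis=0), null_activity.max(axis=0)
uniform_sample = rng.uniform(lo, hi, size=null_activity.shape)

# compare the observed distribution to the uniform one, dimension by dimension
fig, axes = plt.subplots(1, null_activity.shape[1], figsize=(12, 3), sharey=True)
for j, ax in enumerate(axes):
    ax.hist(null_activity[:, j], bins=30, density=True, alpha=0.5, label='observed')
    ax.hist(uniform_sample[:, j], bins=30, density=True, alpha=0.5, label='uniform in bounds')
    ax.set_title(f'null dim {j + 1}')
axes[0].legend()
plt.show()
```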
make sure to walk through the chain of logic:
- neurons -> factors -> null space -> rotation
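A minimal sketch of that chain, assuming a factor-analysis model and a 2D velocity readout (dimensions and variable names are placeholders, not the actual session parameters):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
spikes = rng.poisson(5.0, size=(1000, 88)).astype(float)  # placeholder spike counts (time x neurons)

# neurons -> factors: reduce spike counts to latent factors
fa = FactorAnalysis(n_components=10)
latents = fa.fit_transform(spikes)    # (time x 10)

# factors -> null space: the BCI mapping M reads out 2D cursor velocity from the
# factors, so its null space is the 8D subspace the cursor cannot see
M = rng.standard_normal((2, 10))      # placeholder mapping
_, _, Vt = np.linalg.svd(M)
null_basis = Vt[2:].T                 # (10 x 8) orthonormal basis for null(M)
null_activity = latents @ null_basis  # output-null component at each timepoint

# null space -> rotation: the basis is only defined up to rotation, so predictions
# and data are compared after aligning (rotating) to a common basis
```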
using IME:
- magnitude of errors is higher, within session
- this means predictions are better!
- fundamentally closer to what the monkey is doing
writing about IME:
- when writing results, describe it without IME
- results without IME in supplement
- "but you might be concerned about this..." and now talk about IME
- main figure results are then with IME
TO DO:
- does looking at the 6x6 grid of histograms WITHOUT IME still have the wow factor?
- when presenting, maybe foreshadow the cloud as being "now here's where we're GOING to get," but don't discuss how; that way, when you show the minimal energy hyps, the audience won't care as much
- y axes should be the same across tuning plots
- add light shading (or error bars) in tuning plots to show the shrinking variance in higher output dims (see the plotting sketch after this list)
- minimum tuning: remove the spike rate bounds and verify that the minimum doesn't show tuning in that case
- how many trials are ignored as incorrect? maybe re-run with them included just to verify
- for paper: insert the uncontrolled-sampling/empirical hyp in with the other learning hyps, so it's clear that we're not doing well with cloud/hab just because we're suddenly using real data
- look for spontaneous activity data; contact Emily after trying
- habitual: motivated by learning/history; cloud: just about two mappings
  - "but, turns out they both work well in both directions" <- probably more of a discussion point