Summary: weekly mtg 20161102 (Byron, Steve, Matt, me) - mobeets/nullSpaceControl GitHub Wiki

check # of trials per target in each session
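A quick way to do this check, sketched with pandas on a hypothetical trial log (the column names `session` and `target` are assumptions; the real data lives in the repo's session files):

```python
import pandas as pd

# Hypothetical trial log: one row per trial, with its session and target angle.
trials = pd.DataFrame({
    'session': ['20160722', '20160722', '20160722', '20160726'],
    'target':  [0, 45, 0, 90],
})

# Count trials per target within each session; under-sampled targets show up
# as small (or zero) entries in the resulting session-by-target table.
counts = trials.groupby(['session', 'target']).size().unstack(fill_value=0)
print(counts)
```

Sessions with too few trials for some target could then be flagged or excluded before fitting the output-null distributions.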

explain why we're ignoring sessions with no learning, given that we included all of them in the Nature paper

  • e.g. we want times when the monkey learned and its behavior was stable

if the amount of learning (say, the angle between the intuitive mapping and the IME for the perturbation) predicts less overlap between output-null distributions, does cloud still do well?

  • maybe do anti-pruning, where you ignore all points sampled under similar contexts, to show that cloud isn't doing well just because behavior is habitual
  • or take something like a set-difference of the distributions predicted by pruning and cloud
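One way to put a number on "amount of learning" as the angle between the intuitive mapping and the IME is the principal angle between the row spaces of the two decoders. This is a sketch under assumed shapes (each mapping taken as a 2 x N matrix from neural activity to cursor velocity; the random matrices stand in for fitted decoders):

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(0)

# Hypothetical decoders: each maps n_neurons neural dims to 2 velocity dims.
n_neurons = 10
intuitive = rng.standard_normal((2, n_neurons))  # intuitive (BCI) mapping
ime = rng.standard_normal((2, n_neurons))        # internal model estimate

# Principal angles between the row spaces of the two mappings; the largest
# angle is one candidate scalar for "amount of learning".
angles = subspace_angles(intuitive.T, ime.T)
learning_angle_deg = np.degrees(angles.max())
print(learning_angle_deg)
```

This scalar could then be regressed against the overlap between output-null distributions across sessions to test the prediction above.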