Identify code
🌱 Purpose of this page: categorize Cara's code into four categories:
- hyperalignment
- annotation
- ridge regression
- variance partition
👩🚀 Acronyms
- aa : between-subject anatomical alignment
- ha : between-subject hyperalignment
- ws : within-subject
👏 Identifying and Sorting code
If you've walked through a script, please tag it by adding one of the following keywords in front of the code:
- hyperalign
- annotate
- ridge
- variance

Deepanshi's To Do
- Dockerfile
- README.md
- hyperalign alignment_paired_t-test.py
- ridge alphas.py
- hyperalign; ridge ana.py
- hyperalign ana_correlation_analysis.py
- hyperalign anatomical_isc.py
- hyperalign comp_top_ten_percent.py
- hyperalign compare_model_fits.py
- correlation_analysis.py
- cvu_rdm.pbs : gives the location of rdm_isc [life/forward_encoding/rdm_isc.py]
- docker_r21d_dependencies.sh : installs dependencies for R(2+1)D (OpenCV, ffmpeg, and Caffe2) in Docker/terminal, based on the Facebook VMZ installation guide
- annotate extract_features.py : extracts semantic feature vectors from the annotated movie, i.e. narration/music spectral, narration w2v, image labeling, saliency, motion energy
- annotate extract_global_motion.py
- annotate extract_semantic_category_w2v.py
- famfaceangles
- for_sam.py
- forward.pbs : sets up the job for life/forward_encoding/forward_map_corrs.py
- hyperalign forward_map_corrs.py
- ridge get_alphas.py : gets the alpha value for a single run of a participant, for the left-out hemisphere, for one particular model (see the sketch below)
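For context, a minimal sketch of how a per-run alpha could be selected with scikit-learn's RidgeCV. This is not necessarily the ridge implementation the script uses; the shapes and the alpha grid are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import RidgeCV

# Placeholder data: stimulus features (TRs x features) and
# left-out-hemisphere responses (TRs x vertices) for one run.
X = np.random.randn(400, 300)
Y = np.random.randn(400, 1000)

# Search a log-spaced alpha grid; RidgeCV selects the best value
# by efficient leave-one-out cross-validation across all targets.
alphas = np.logspace(0, 3, 20)
model = RidgeCV(alphas=alphas).fit(X, Y)
print('selected alpha:', model.alpha_)
```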
Heejung's To Do
- hyperalign get_min_max.py: loads the anatomically aligned / hyperaligned dataset and gets the min, max, 5%, and 95% values. Q. what is this number? Hyperaligned beta coefficients?
- hyperalign get_union_intersection.py: grabs the union of ws and anatomical alignment, and the intersection of ws and anatomical alignment. Q. the intersection/union of "vertices", correct? (see the sketch below)
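If the union/intersection is indeed over vertex indices, the core operation is presumably along these lines. The index arrays here are made up; this is a sketch, not the script's actual variables.

```python
import numpy as np

# Hypothetical vertex indices surviving some criterion in each analysis
ws_vertices = np.array([3, 7, 12, 20, 31])
aa_vertices = np.array([7, 12, 15, 31, 42])

union = np.union1d(ws_vertices, aa_vertices)             # in either analysis
intersection = np.intersect1d(ws_vertices, aa_vertices)  # in both analyses
```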
- hyperalign group_correlation_test.py: t-test between 1) the average Fisher z of ws and ha vs. 2) the average Fisher z of ws and aa (see the sketch below)
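A minimal sketch of that comparison step with SciPy: Fisher z-transform the correlations, then run a paired t-test. The per-subject correlation values are placeholders.

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject mean correlations for each alignment pair
r_ws_ha = np.array([0.31, 0.28, 0.35, 0.30])  # ws vs. ha
r_ws_aa = np.array([0.22, 0.25, 0.27, 0.21])  # ws vs. aa

# Fisher z-transform stabilizes variance before the paired t-test
z_ha = np.arctanh(r_ws_ha)
z_aa = np.arctanh(r_ws_aa)
t, p = stats.ttest_rel(z_ha, z_aa)
```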
- hyperalign group_smoothness_test.py: using AFNI/SUMA's SurfFWHM, calculates the smoothness in aa, ha, and ws. This method is different from the group correlation test, and thinking it over would be helpful.
  - one of the files created by this script, smoothness_ana.png, is currently in /idata/DBIC/cara/models/niml
- hyperalign hyperalignment.py (see the sketch below)
- hyperalign hyperalignment_cvu.pbs: wrapper script for submitting all_ridge.py
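For reference, the canonical PyMVPA hyperalignment call looks roughly like this. It assumes a list of per-subject datasets is already loaded; this is a sketch of the library API, not necessarily how hyperalignment.py is organized.

```python
import mvpa2.suite as mv

# datasets: one PyMVPA Dataset (time x vertices) per subject,
# e.g. loaded with mv.gifti_dataset(...) as elsewhere in this repo
datasets = [...]  # placeholder

# Train hyperalignment; it returns one mapper per subject that
# projects that subject's data into the common space
ha = mv.Hyperalignment()
mappers = ha(datasets)
aligned = [m.forward(ds) for m, ds in zip(mappers, datasets)]
```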
- hyperalign; isc isc.pbs: wrapper script for isc.py
- hyperalign; isc isc.py (see the ISC sketch after this entry):
  - input:
    - sam_data_dir = '/idata/DBIC/snastase/life'
    - data_dir = '/idata/DBIC/cara/life/ridge/models'
    - mvpa_dir = '/idata/DBIC/cara/life/pymvpa/'
    - hyperaligned data: mvpa_dir, 'search_hyper_mappers_life_mask_nofsel_lh_leftout_{0}.hdf5'
  - output:
    - mv.niml.write(os.path.join(data_dir, '{0}_isc_run{1}_vsmean.lh.niml.dset'.format(model, run)), lh)
    - ISC per run is stacked and saved to /idata/DBIC/cara/life/ridge/models/isc via mv.niml.write(os.path.join(data_dir, 'isc/{0}_isc_vsmean.lh.niml.dset'.format(model)), np.mean(lh_avg_stack, axis=0)[None,:])
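The "vsmean" in the output filenames suggests leave-one-out ISC: each subject's time series is correlated with the mean of the remaining subjects. A minimal numpy sketch of that computation; the array shapes are assumptions.

```python
import numpy as np

# Placeholder data: subjects x TRs x vertices
data = np.random.randn(18, 400, 1000)
n_subj = data.shape[0]

isc = np.empty((n_subj, data.shape[2]))
for s in range(n_subj):
    # Mean time series of everyone except subject s
    others = np.mean(np.delete(data, s, axis=0), axis=0)
    # Column-wise Pearson correlation between subject s and that mean
    xs = data[s] - data[s].mean(0)
    ys = others - others.mean(0)
    isc[s] = (xs * ys).sum(0) / (
        np.sqrt((xs ** 2).sum(0)) * np.sqrt((ys ** 2).sum(0)))

# Group summary: average in Fisher z space, then transform back
mean_isc = np.tanh(np.mean(np.arctanh(isc), axis=0))
```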
- annotate load_files.py: NOTE - does not seem to be a complete script. Loads JSON w2v embeddings for each part (4 files); uses JSON because it has vectors labeled with words.
  - input: json.load(open('/Users/caravanuden/Desktop/life/forward_encoding/old_codes/Part{0}_Raw_Data.json'.format(i)))
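Since the script is incomplete, here is a guess at what the finished loading step might look like: merge the four JSON parts into one word-to-vector dictionary. The path comes from the snippet above; the {word: vector} file structure is an assumption.

```python
import json
import numpy as np

w2v = {}
for i in range(1, 5):
    path = ('/Users/caravanuden/Desktop/life/forward_encoding/'
            'old_codes/Part{0}_Raw_Data.json'.format(i))
    with open(path) as f:
        part = json.load(f)
    # Assumed structure: {word: [floats], ...}
    for word, vec in part.items():
        w2v[word] = np.asarray(vec, dtype=float)
```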
- NOTSURE make_predictions.py
  - input: npy_dir = '/dartfs-hpc/scratch/cara/w2v/w2v_features' does not exist, but is there something equivalent? motion = np.load('/ihome/cara/global_motion/motion_downsampled_complete.npy')
- narrative_actions.csv
- narrative_model.py
  - inputs: related to this directory? /idata/DBIC/cara/narrative_models
    - directory = os.path.join('/dartfs-hpc/scratch/cara/new_models/narrative', '{0}/run_{1}'.format(stimfile, fold_shifted), test_p, hemi) -> not quite the same, but is /idata/DBIC/cara/new_models/narrative/niml similar?
  - output: np.save(os.path.join(directory, 'weights.npy'), wt)
- annotate narrative_nouns.csv: inputs utilized in what? SHOULD identify the code that calls this csv file.
- variance partition_variance.py (see the variance-partition sketch at the end of this page)
- pca.py: incomplete. Reads a csv file: l = pd.read_csv(os.path.join('/dartfs-hpc/scratch/cara/w2v/semantic_categories/', f)). Perhaps related to semantic category PCA?
- plot_point_spread.py: incomplete. One line of code that loads np.load('/ihome/fma/cara/point_spread_function_results.npz')
- rdm_isc.py: probably incomplete? Compare with isc.py (see the RDM sketch below).
  - input:
    - sam_data_dir = '/idata/DBIC/snastase/life'
    - suma_dir = '/idata/DBIC/snastase/life/SUMA'
    - mappers = mv.h5load(join(mvpa_dir, 'search_hyper_mappers_life_mask_nofsel_{0}leftout{1}.hdf5'.format(hemi, run)))
    - ds = mv.gifti_dataset(join(sam_data_dir, '{0}_task-life_acq-{1}vol_run-0{2}.{3}.tproject.gii'.format(participant, tr[run], run, hemi)))
  - output:
    - mv.h5save('/idata/DBIC/cara/search_hyper_mappers_life_mask_nofsel_{0}{1}leftout{1}{2}.hdf5'.format(participant, hemi, left_out, sys.argv[1]), final)
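For orientation when comparing this with isc.py: an RDM is just a pairwise-dissimilarity matrix over response patterns. A minimal SciPy sketch; the pattern array is a placeholder and this is not the script's actual code.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Placeholder patterns: conditions/timepoints x vertices
patterns = np.random.randn(20, 1000)

# Correlation distance (1 - Pearson r) between every pair of patterns
rdm = squareform(pdist(patterns, metric='correlation'))
```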
Xiaochun's To Do
- ridge_regression.py
- save_for_suma.py
- save_hyper_data.py
- save_masked_niml.py
- save_niml.py
- save_nuisance.py
- search_ISC.py
- search_RDMs.py
- semantic_categories
- slh.py
- slh_combined.lh.niml.dset
- slh_correlation_analysis.py
- slh_corrs.lh.niml.dset
- stats.py: calculates the pair-wise correlation difference between the following models (see the sketch below)
  - ['ws', 'aa'], ['ws', 'ha_common'], ['aa', 'ha_common'], ['aa', 'ha_testsubj'], ['ws', 'ha_testsubj'], ['ha_testsubj', 'ha_common']
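A sketch of the kind of comparison this implies, assuming one per-vertex correlation map per model. The dict keys mirror the pair list above; everything else (shapes, random values) is made up.

```python
import numpy as np

# Hypothetical per-vertex correlation maps for each model
corrs = {m: np.random.rand(1000)
         for m in ['ws', 'aa', 'ha_common', 'ha_testsubj']}

pairs = [('ws', 'aa'), ('ws', 'ha_common'), ('aa', 'ha_common'),
         ('aa', 'ha_testsubj'), ('ws', 'ha_testsubj'),
         ('ha_testsubj', 'ha_common')]

# Per-vertex difference map for each model pair
diffs = {(a, b): corrs[a] - corrs[b] for a, b in pairs}
```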
- ts_corrs.py
- utils
- visual_model.py
- visuals.py
- annotate w2v.py
2. Semantic Features
Scripts
- word2vector
Data
- Behavior
- Taxonomy
- Scene
3. Regularized Regression
Scripts
- Behavior
- Taxonomy
- Scene
- Behavior & Taxonomy
- Behavior & Scene
- Taxonomy & Scene
- Behavior & Taxonomy & Scene
Data
- Behavior
- Taxonomy
- Scene
- Behavior & Taxonomy
- Behavior & Scene
- Taxonomy & Scene
- Behavior & Taxonomy & Scene
4. Variance Partition
Scripts
- Behavior
- Taxonomy
- Scene
- Behavior & Taxonomy
- Behavior & Scene
- Taxonomy & Scene
- Behavior & Taxonomy & Scene
Data
- Behavior
- Taxonomy
- Scene
- Behavior & Taxonomy
- Behavior & Scene
- Taxonomy & Scene
- Behavior & Taxonomy & Scene
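The seven Behavior/Taxonomy/Scene combinations above are exactly what a three-way variance partition needs: fit all seven regularized regression models, then decompose their R² values into unique and shared components. A hedged sketch of that bookkeeping, using standard commonality-analysis arithmetic with placeholder R² values; this is not necessarily partition_variance.py's exact code.

```python
# R^2 from the seven regularized regression models (placeholders):
# B = Behavior, T = Taxonomy, S = Scene, and their combinations
r2 = {'B': 0.10, 'T': 0.12, 'S': 0.08,
      'BT': 0.16, 'BS': 0.14, 'TS': 0.15, 'BTS': 0.18}

# Unique variance of each feature space: full model minus the
# model that leaves that feature space out
unique_B = r2['BTS'] - r2['TS']
unique_T = r2['BTS'] - r2['BS']
unique_S = r2['BTS'] - r2['BT']

# Variance shared by all three spaces (inclusion-exclusion)
shared_BTS = (r2['B'] + r2['T'] + r2['S']
              - r2['BT'] - r2['BS'] - r2['TS'] + r2['BTS'])
```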