fMRI processing pipeline: HCP-style image dataset HCP_ALL_Family
Step 1: Get familiar with HCP-style datasets
The Human Connectome Project (HCP) has tackled one of the great scientific challenges of the 21st century: mapping the human brain, aiming to connect its structure to function and behavior. The "HCP-style" neuroimaging approach has been applied to a new generation of studies. Our lab has downloaded the HCP Young Adult dataset and the Lifespan Human Connectome Project Aging dataset.
* HCP Young Adult dataset
We have all data from the HCP Young Adult dataset, including sMRI, task-fMRI and dMRI. The data structure is as follows:
Overall, we have 1080 subjects, but not all subjects are complete for all modalities. The original files were compressed per subject, so we need to unzip each file for further processing.
ls -d /HCP_orgin_data_dir/Language_Task_fMRI_Preprocessed/*_3T_tfMRI_LANGUAGE_preproc.zip > /HCP_ALL_family_process_dir/code/language_file_sub.txt
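Before unzipping, you can check how many archives were found (i.e., how many subjects have this modality):

wc -l /HCP_ALL_family_process_dir/code/language_file_sub.txt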
We can use the following commands to unzip each subject's archive (alternatively, you can use MATLAB, Python, or whatever language you are familiar with):
cd /HCP_ALL_family_process_dir/code/
sh unzip_file.sh
The unzip_file.sh script is as follows:
# create the target directory, then unzip every archive listed in language_file_sub.txt
mkdir -p /HCP_unzip_data_dir/Language_Task_fMRI_Preprocessed
for file in `cat language_file_sub.txt`
do
    unzip -q -d /HCP_unzip_data_dir/Language_Task_fMRI_Preprocessed ${file}
    echo ${file} # print progress
done
You can write a loop to unzip every modality for each subject, or execute the script above once per modality.
This unzip step takes a long time; you can use GNU Parallel to speed it up (see the sketch after the screen example below), or screen to run it silently in the background.
For example, with screen:
cd /HCP_ALL_family_process_dir/code/
screen
sh unzip_file.sh
Then press "Ctrl+A" followed by "D" to detach from the session and leave it running.
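If GNU Parallel is installed, the same job can be parallelized in one line (a sketch: -j 8 runs eight unzips at a time, and :::: reads the arguments from the list file):

cd /HCP_ALL_family_process_dir/code/
mkdir -p /HCP_unzip_data_dir/Language_Task_fMRI_Preprocessed
parallel -j 8 unzip -q -d /HCP_unzip_data_dir/Language_Task_fMRI_Preprocessed {} :::: language_file_sub.txt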
After you unzip all modalities of the HCP_ALL_family dataset, it is worth walking through the storage structure of one subject, because HCP-style datasets follow a standard organization.
We use an example subject from the tfMRI_LANGUAGE task to explain the storage structure; resting-state fMRI data are stored the same way as task fMRI. The only difference is that resting-state runs have no EVs folder, since no events were shown to subjects (obviously, there is no task).
The Results folder contains the LR-scan and RL-scan fMRI data, which have been processed by the HCP minimal preprocessing pipeline. If you are not familiar with the HCP pipeline, please read the following article:
HCP minimal preprocess pipeline
In summary, the files we have were corrected for spatial distortions, realigned to compensate for subject motion, registered to the structural image, normalized to a global mean, masked with the final brain mask, and projected into the CIFTI grayordinate standard space (91k space). If you are not familiar with the CIFTI file format, please check the following website:
So, we need post-processing to denoise the fMRI data. Post-processing is mostly applied to resting-state fMRI, but sometimes we regress the task information (recorded as onsets and durations in the EVs folder) out of task fMRI and treat the residual as resting state; we will return to this later.
Now, let's take a look inside the LR-scan and RL-scan folders:
The only difference between the RL and LR folders is the phase-encoding direction of the fMRI scans. If you are curious about the effect of phase encoding on fMRI data, please check the following links: How do you pick which anatomic direction to use for frequency- or phase-encoding? and this article: Effect of phase-encoding direction on group analysis of resting-state functional magnetic resonance imaging.
The most important files for fMRI data (both task and resting state) include:
- Movement_Regressors.txt
- Movement_RelativeRMS.txt
- tfMRI_LANGUAGE_RL.nii.gz
- tfMRI_LANGUAGE_RL_Atlas_MSMAll.dtseries.nii
- tfMRI_LANGUAGE_RL_Atlas.dtseries.nii
The Movement_Regressors.txt file contains 12 variables. The first six (trans_x, y, z in mm and rot_x, y, z in deg) are the motion parameter estimates from a rigid-body registration to the SBRef image acquired at the start of each fMRI scan. The remaining six are the temporal derivatives of those parameters. If you are not familiar with motion detection and correction for fMRI data, please read the following paper and others on this topic (motion control is very important in fMRI research; search for more work by Jonathan D. Power, Theodore D. Satterthwaite, and others):
Motion Artifact in Studies of Functional Connectivity: Characteristics and Mitigation Strategies
The Movement_RelativeRMS.txt file contains an estimate of frame-to-frame displacement for each run: the RMS (root-mean-square) displacement computed during realignment with MCFLIRT. If you are familiar with Power's FD, you will notice that FD values are approximately twice as large as this "relative RMS" measure. For more specific information, please check the articles above.
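If you want Power-style FD values directly, they can be computed from the first six columns of Movement_Regressors.txt: FD is the sum of absolute frame-to-frame differences of the six parameters, with rotations converted from degrees to arc length on a 50 mm sphere. A minimal awk sketch, assuming the column order described above:

awk '
function abs(x) { return x < 0 ? -x : x }
NR > 1 {
  fd = 0
  for (i = 1; i <= 3; i++) fd += abs($i - prev[i])                              # translations (mm)
  for (i = 4; i <= 6; i++) fd += 50 * abs(($i - prev[i]) * 3.14159265 / 180)    # rotations: deg -> rad -> mm on a 50 mm sphere
  printf "%.4f\n", fd
}
{ for (i = 1; i <= 6; i++) prev[i] = $i }
' Movement_Regressors.txt > FD.txt

Each printed value corresponds to a frame transition, so the output has one fewer row than the input; prepending a 0 for the first frame is the usual convention.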
The tfMRI_LANGUAGE_RL.nii.gz file is the minimally preprocessed volume fMRI data, and both tfMRI_LANGUAGE_RL_Atlas.dtseries.nii and tfMRI_LANGUAGE_RL_Atlas_MSMAll.dtseries.nii are grayordinate-space fMRI data. The difference between them can be found in the following paper and website:
MSM: A new flexible framework for Multimodal Surface Matching
MSM: a cortical surfaces registration tool
Those five files are necessary for the later processing. If you are interested in the other files, please read the following website:
HCP 1200-subjects-data-release
Besides the fMRI data, we also need the structural MRI data for each subject. So, let's take a look at the structural MRI folder of the example subject.
We should use the files in the MNINonLinear folder. The surface files (thickness, curvature, SmoothedMyelinMap, and other FreeSurfer-derived surface files) can be found there, and the fsaverage_LR32k subfolder contains the same files registered to fsLR_32k space. What we need is the wmparc.2.nii.gz image, stored in the ROIs folder, which contains the labels for the white matter (WM) mask and the CSF mask. If you don't know why we need the WM and CSF masks, please check the following website:
We can use the wmparc.2.nii.gz file to extract the WM and CSF signals for each subject and save them in text files for later processing. I use fslmaths to compute the WM and CSF masks from wmparc.2.nii.gz, based on labels from Dr. Eleftherios Garyfallidis and Dr. Matthew Glasser:
#wm mask
fslmaths wmparc.2.nii.gz -thr 3000 -uthr 4035 -bin wm_mask_tmp1.nii.gz
fslmaths wmparc.2.nii.gz -thr 5001 -uthr 5002 -bin wm_mask_tmp2.nii.gz
fslmaths wm_mask_tmp1.nii.gz -add wm_mask_tmp2.nii.gz -bin wm_mask.nii.gz
#csf mask
fslmaths wmparc.2.nii.gz -thr 4 -uthr 5 -bin csf_mask_tmp1.nii.gz
fslmaths wmparc.2.nii.gz -thr 14 -uthr 15 -bin csf_mask_tmp2.nii.gz
fslmaths wmparc.2.nii.gz -thr 24 -uthr 24 -bin csf_mask_tmp3.nii.gz
fslmaths wmparc.2.nii.gz -thr 31 -uthr 31 -bin csf_mask_tmp4.nii.gz
fslmaths wmparc.2.nii.gz -thr 43 -uthr 44 -bin csf_mask_tmp5.nii.gz
fslmaths wmparc.2.nii.gz -thr 63 -uthr 63 -bin csf_mask_tmp6.nii.gz
fslmaths wmparc.2.nii.gz -thr 250 -uthr 255 -bin csf_mask_tmp7.nii.gz
fslmaths csf_mask_tmp1.nii.gz -add csf_mask_tmp2.nii.gz \
-add csf_mask_tmp3.nii.gz \
-add csf_mask_tmp4.nii.gz \
-add csf_mask_tmp5.nii.gz \
-add csf_mask_tmp6.nii.gz \
-add csf_mask_tmp7.nii.gz \
-bin csf_mask.nii.gz
The WM mask is shown below:
The CSF mask is shown below:
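Before moving on, it is worth checking that both masks are non-empty; fslstats -V prints the voxel count and volume (mm^3):

fslstats wm_mask.nii.gz -V
fslstats csf_mask.nii.gz -V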
However, due to motion and partial-volume effects, some BOLD signal from grey matter may leak into voxels labeled as WM or CSF, so we remove boundary voxels to be more confident that the region contains only noise. This step is described in Mitigating head motion artifact in functional connectivity MRI (step 16: Compute eroded WM and CSF masks). We use the XCP Engine for this; you need to set up the xcpEngine environment from source. We can use the following scripts to compute the eroded WM and CSF masks:
WM mask
#!/usr/bin/env bash
source ${XCPEDIR}/core/constants # you need to setup the xcp_engine tool
source ${XCPEDIR}/core/functions/library.sh
####################
minret=90 # erode the mask, keeping only its deepest layers, so as to minimize partial volume effects
image=/HCP_ALL_family_process/code/WM_csf_mask/wm_mask.nii.gz
out=/HCP_ALL_family_process/code/WM_csf_mask/
intermediate=/HCP_ALL_family_process/code/WM_csf_mask/
fslmaths ${image} -bin ${intermediate}-bin.nii.gz
image=${intermediate}-bin.nii.gz
exec_xcp layerLabels -l ${image} -i ${intermediate} -o ${intermediate}-onion.nii.gz
llim=$(arithmetic 200-${minret})
exec_fsl fslmaths ${intermediate}-onion.nii.gz -thr ${llim} -uthr 200 ${out}/final_WM_10_mask
The final_WM_10_mask is shown below:
CSF mask
#!/usr/bin/env bash
source ${XCPEDIR}/core/constants # you need to setup the xcp_engine tool
source ${XCPEDIR}/core/functions/library.sh
####################
minret=90
image=/HCP_ALL_family_process/code/WM_csf_mask/csf_mask.nii.gz
out=/HCP_ALL_family_process/code/WM_csf_mask/
intermediate=/HCP_ALL_family_process/code/WM_csf_mask/
fslmaths ${image} -bin ${intermediate}-bin.nii.gz
image=${intermediate}-bin.nii.gz
exec_xcp layerLabels -l ${image} -i ${intermediate} -o ${intermediate}-onion.nii.gz
llim=$(arithmetic 200-${minret})
exec_fsl fslmaths ${intermediate}-onion.nii.gz -thr ${llim} -uthr 200 ${out}/final_csf_10_mask
The final_csf_10_mask is shown below:
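As a sanity check, the eroded masks should be much smaller than the originals; comparing voxel counts makes this obvious:

# each pair should show a clear drop in voxel count after erosion
fslstats wm_mask.nii.gz -V
fslstats final_WM_10_mask.nii.gz -V
fslstats csf_mask.nii.gz -V
fslstats final_csf_10_mask.nii.gz -V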
Now we have a general understanding of HCP-style images and how HCP datasets are stored. Next, we will move on to what the BIDS protocol is and how to prepare files for post-processing denoising.
Step 2: Convert the HCP-style dataset into the BIDS structure
For standardized fMRI processing, we use the fMRIPrep and XCP-ABCD tools to process the HCP fMRI data. However, we need to convert the HCP-style dataset into the BIDS structure because of the input restrictions of XCP-ABCD; that is why we walked through the HCP-style dataset first. The BIDS standard aims to organize and describe neuroimaging data in a uniform way to simplify data sharing across the scientific community. Let's look at the BIDS structure first, and then convert the HCP Young Adult dataset into it. If you want to learn BIDS, please check the bids-standard Tutorials website. Here, we just introduce some basic rules of BIDS and give some useful examples.
Overall, these are the three main types of files you'll find in a BIDS dataset:
- .json files that contain key: value metadata
- .tsv files that contain tables of metadata
- Raw data files (e.g., .jpg files for images or .nii.gz files for fMRI data)
These three types of files are organized into a hierarchy of folders with specific naming conventions, explained below. There are four main levels of the folder hierarchy:
project/
└── subject
    └── session
        └── datatype
With the exception of the top-level project folder, all sub-folders have a specific structure to their names (described below). Here's an example of how this hierarchy looks:
myProject/
└── sub-01
    └── ses-01
        └── anat
However, the **session** folder is not mandatory in BIDS. We can make the layout simpler, like this:
HCP_ALL_family/
└── sub-100206
    ├── func
    └── anat
Subject structure:
sub-<participant label>
There is one folder per subject in the dataset, and labels must be unique.
BIDS defines a fixed set of datatypes, which must be one of:
- 'func', 'dwi', 'fmap', 'anat', 'meg', 'eeg', 'ieeg', 'beh', 'pet'
The datatype name depends on the recording modality; here we only use func and anat.
Besides the folder hierarchy, each file follows an organized naming convention. Here is an example:
├── anat
│   ├── sub-100206_from-MNI152NLin2009cAsym_to-T1w_mode-image_xfm.h5
│   └── sub-100206_from-T1w_to-MNI152NLin2009cAsym_mode-image_xfm.h5
└── func
    ├── sub-100206_task-REST1_acq-LR_desc-confounds_timeseries.json
    ├── sub-100206_task-REST1_acq-LR_desc-confounds_timeseries.tsv
    ├── sub-100206_task-REST1_acq-LR_space-fsLR_den-91k_bold.dtseries.json
    ├── sub-100206_task-REST1_acq-LR_space-fsLR_den-91k_bold.dtseries.nii
    ├── sub-100206_task-REST1_acq-LR_space-fsLR_den-91k_bold.json
    ├── sub-100206_task-REST1_acq-LR_space-MNI152NLin2009cAsym_boldref.nii.gz
    ├── sub-100206_task-REST1_acq-LR_space-MNI152NLin2009cAsym_desc-brain_mask.nii.gz
    ├── sub-100206_task-REST1_acq-LR_space-MNI152NLin2009cAsym_desc-preproc_bold.json
    ├── sub-100206_task-REST1_acq-LR_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz
    ├── sub-100206_task-REST1_acq-RL_desc-confounds_timeseries.json
    ├── sub-100206_task-REST1_acq-RL_desc-confounds_timeseries.tsv
    ├── sub-100206_task-REST1_acq-RL_space-fsLR_den-91k_bold.dtseries.json
    ├── sub-100206_task-REST1_acq-RL_space-fsLR_den-91k_bold.dtseries.nii
    ├── sub-100206_task-REST1_acq-RL_space-fsLR_den-91k_bold.json
    ├── sub-100206_task-REST1_acq-RL_space-MNI152NLin2009cAsym_boldref.nii.gz
    ├── sub-100206_task-REST1_acq-RL_space-MNI152NLin2009cAsym_desc-brain_mask.nii.gz
    ├── sub-100206_task-REST1_acq-RL_space-MNI152NLin2009cAsym_desc-preproc_bold.json
    └── sub-100206_task-REST1_acq-RL_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz
Note that the file names of the func data contain the following entities:
subject: sub-*
task: task-*
acquisition (the phase-encoding direction): acq-*
space: space-*
status: desc-preproc
modality: bold.nii.gz
These entities are concatenated with "_" (see the bash sketch after the anat list below).
The anat file names contain the following entities:
subject: sub-*
transform info: from-MNI152NLin2009cAsym_to-T1w
mode: image
file type: xfm.h5
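As a quick illustration, you can split any of these file names into its entities with plain bash (the file name here is just one of the examples above):

fname="sub-100206_task-REST1_acq-LR_space-fsLR_den-91k_bold.dtseries.nii"
base=${fname%%.*}               # drop the extension(s)
IFS='_' read -ra entities <<< "$base"
printf '%s\n' "${entities[@]}"  # prints sub-100206, task-REST1, acq-LR, space-fsLR, den-91k, bold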
So we can see that the BIDS structure is quite different from the HCP dataset, and we need to write a script to convert HCP data into BIDS. However, if we want to use XCP-ABCD to process the fMRI data directly, some preparatory work is needed beyond just converting the existing HCP dataset into BIDS. Besides the BIDS structure, we need a confounds file for the regression step in XCP-ABCD. HCP only provides motion files, so we must calculate the cerebrospinal fluid (CSF), white matter (WM), and global signals from the BOLD file for each subject, as mentioned in Step 1. Here, I use the masks derived from wmparc.2.nii.gz to extract the WM, global, and CSF signals for each subject:
clear;clc;
% set the root path
root_path = '/HCP_data_dir/Resting_State_fMR1_2_Preprocessed';
code_path = '/HCP_data_dir/HCP_data/code/';
% set masks -- note: these are the eroded WM and CSF masks computed in Step 1
wm_mask = '/HCP_data_dir/code/WM_csf_mask/final_WM_10_mask.nii.gz';
csf_mask = '/HCP_data_dir/code/WM_csf_mask/final_csf_10_mask.nii.gz';
whole_brain = '/HCP_data_dir/code/WM_csf_mask/brainmask_fs.2.nii.gz';
% set list
%fid = fopen([root_path,filesep,'code/data_name.txt'],'r');
%data_name = textscan(fid,'%d%d');
%fclose(fid)
data_name = GetFiles(root_path); % GetFiles is a lab helper that lists the entries of a folder
out_sub = cell(length(data_name),1); % preallocate the sliced output used inside parfor
MyPar = parpool('local',8);
parfor i = 1:length(data_name)
%get the LR and RL data
data_type = GetFiles([root_path,filesep,num2str(data_name{i}),filesep,'MNINonLinear/Results/']);
data_nii_LR = [root_path,filesep,num2str(data_name{i}),filesep,'MNINonLinear/Results/',data_type{1},filesep,data_type{1},'.nii.gz'];
data_nii_RL = [root_path,filesep,num2str(data_name{i}),filesep,'MNINonLinear/Results/',data_type{2},filesep,data_type{2},'.nii.gz'];
%get file dir
[path_LR,~,~] = fileparts(data_nii_LR);
[path_RL,~,~] = fileparts(data_nii_RL);
if exist(data_nii_LR,'file') && exist(data_nii_RL,'file')
% extract signal from wm_signal by using FSL: fslmeants
if exist([path_LR,filesep,'wm_mean_signal.txt'],'file') && exist([path_LR,filesep,'csf_mean_signal.txt'],'file') && exist([path_LR,filesep,'whole_mean_signal.txt'],'file')
continue
else
cmd = ['fslmeants -i ',data_nii_LR,' -o ',path_LR,filesep,'wm_mean_signal.txt',' -m ', wm_mask];
system(cmd);
disp([num2str(data_name{i}),data_type{1},'WM already done']);
cmd = ['fslmeants -i ',data_nii_RL,' -o ',path_RL,filesep,'wm_mean_signal.txt',' -m ', wm_mask];
system(cmd);
disp([num2str(data_name{i}),data_type{2},'WM already done']);
% extract signal from csf_signal
cmd = ['fslmeants -i ',data_nii_LR,' -o ',path_LR,filesep,'csf_mean_signal.txt',' -m ', csf_mask];
system(cmd);
disp([num2str(data_name{i}),data_type{1},'csf already done']);
cmd = ['fslmeants -i ',data_nii_RL,' -o ',path_RL,filesep,'csf_mean_signal.txt',' -m ', csf_mask];
system(cmd);
disp([num2str(data_name{i}),data_type{2},'csf already done']);
% extract signal from whole_brain_signal
cmd = ['fslmeants -i ',data_nii_LR,' -o ',path_LR,filesep,'whole_mean_signal.txt',' -m ', whole_brain];
system(cmd);
disp([num2str(data_name{i}),data_type{1},'whole already done']);
cmd = ['fslmeants -i ',data_nii_RL,' -o ',path_RL,filesep,'whole_mean_signal.txt',' -m ', whole_brain];
system(cmd);
disp([num2str(data_name{i}),data_type{2},'whole already done']);
end
else
out_sub{i} = num2str(data_name{i});
end
end
out_sub(cellfun(@isempty,out_sub))=[];
out_sub = cell2table(out_sub');
writetable(out_sub,[code_path,'RSF2_out.txt'],'Delimiter',' ');
delete(MyPar)
After we obtain the WM, CSF and global signals from the rs-fMRI data for each subject, we need to make the *-confounds_timeseries.tsv file, which is used to regress out nuisance signals. To keep the TSV data tidy and clean, I wrote an R script (if you are not familiar with R, you can move to the HCPD-process pipeline, where all of this project's code is reimplemented in Python). The script is as follows:
#set root path
setwd('/HCP_data_dir/Resting_State_fMR1_2_Preprocessed')
root_path <- getwd()
# get the sub name
data_name <- dir()
file_not_tsv <- vector()
#a function to convert the degree to radians
deg2rad <- function(deg) {(deg * pi) / (180)}
#get movement txt and WM\CSF\GSR txt
for (sub in 1:length(data_name)) {
file_path <- file.path(root_path,data_name[sub],'MNINonLinear/Results')
file_name <- dir(file_path)
for (type in 1:2) {
# get the motion data and tissue signal
motion_file <- file.path(file_path,file_name[type],'Movement_Regressors.txt')
csf_file <- file.path(file_path,file_name[type],'csf_mean_signal.txt')
wm_file <- file.path(file_path,file_name[type],'wm_mean_signal.txt')
whole_file <- file.path(file_path,file_name[type],'whole_mean_signal.txt')
rmsd_file <- file.path(file_path,file_name[type],'Movement_RelativeRMS.txt')
# create the confound_data.frame
confound <- data.frame(global_signal=numeric(0),
csf=numeric(0),
white_matter=numeric(0),
trans_x=numeric(0),
trans_y=numeric(0),
trans_z=numeric(0),
rot_x=numeric(0),
rot_y=numeric(0),
rot_z=numeric(0),
rmsd=numeric(0))
if(file.exists(csf_file) & file.exists(whole_file) & file.exists(wm_file)){
motion_txt <- read.table(motion_file,header = FALSE)
csf_txt <- read.table(csf_file,header = FALSE)
wm_txt <- read.table(wm_file,header = FALSE)
whole_txt <- read.table(whole_file,header = FALSE)
rmsd_txt <- read.table(rmsd_file,header = FALSE )
# create the confound file
data_len <- nrow(motion_txt)
confound[1:data_len,1] <- whole_txt
confound[1:data_len,2] <- csf_txt
confound[1:data_len,3] <- wm_txt
confound[1:data_len,4:9] <- motion_txt[,1:6]
confound[1:data_len,10] <- rmsd_txt
confound[,7] <- deg2rad(confound[,7])
confound[,8] <- deg2rad(confound[,8])
confound[,9] <- deg2rad(confound[,9])
# save data for each participant
name_data <- strsplit(file_name[type],'_')
name_data <- as.vector(name_data[[1]])
tsv_name <- paste(paste('sub',data_name[sub],sep = "-"),
paste('ses','func',sep='-'),
paste('task',paste(name_data[1:length(name_data)],collapse = "-"),sep="-"),
'desc-confounds_timeseries.tsv',sep = '_')
tsv_path <- file.path(file_path,file_name[type],tsv_name)
write.table(confound, file = tsv_path, row.names = FALSE, sep = "\t", quote = FALSE) # quote=FALSE keeps the TSV header unquoted
}else{
file_not_tsv[sub] <- data_name[sub] # record subjects with missing signal files
print(paste(data_name[sub],'has not done',sep="-"))
}
rm(confound)
}
}
file_not_tsv <- as.matrix(unique(na.omit(file_not_tsv)))
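Once the script finishes, each run folder should contain a confounds TSV. A quick check on one subject (the exact path and run name here are illustrative):

head -1 /HCP_data_dir/Resting_State_fMR1_2_Preprocessed/100206/MNINonLinear/Results/rfMRI_REST1_LR/sub-100206_ses-func_task-rfMRI-REST1-LR_desc-confounds_timeseries.tsv

This should print the ten column names defined above: global_signal, csf, white_matter, trans_x, trans_y, trans_z, rot_x, rot_y, rot_z, rmsd.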
Now we have all the data needed to build the BIDS dataset for xcp_abcd post-processing, and we can use MATLAB to convert these files into BIDS. You need to install the jo tool first, because we need to create some .json sidecar files.
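If you have not used jo before, it simply turns key=value pairs into JSON (with -p for pretty-printing), which is exactly what the script below shells out to. For example:

jo -p RepetitionTime=0.72 SkullStripped=false TaskName=REST1

prints something like:

{
   "RepetitionTime": 0.72,
   "SkullStripped": false,
   "TaskName": "REST1"
}

The conversion script is as follows: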
clear;clc;
% set path
addpath(genpath('/HCP_project/code'));
root_path = '/HCP_data_dir/';
anat_file_fake = ['/MSC_data_dir/derivatives/fmriprep/sub-MSC01/anat/' ...
'sub-MSC01_from-MNI152NLin6Asym_to-T1w_mode-image_xfm.h5']; % HCP does not provide this transform file; it is not used in post-processing, so we copy a fake file to avoid an xcp_abcd error.
data_type = {'Resting_State_fMR1_2_Preprocessed';};
MyPar = parpool('local',16);
parfor i = 1:length(data_type)
%set the original data path
original_data_path = [root_path,filesep,data_type{i}];
%get the subject list
sub_list = GetFiles(original_data_path);
%get the name of LR and RL
for nsub = 1:length(sub_list)
file_path = [original_data_path,filesep,sub_list{nsub},filesep,'MNINonLinear/Results',filesep];
data_Direct = GetFiles(file_path);
%get the nii \ cii data and tsv data
for data_d = 1:2
nii_data = fullfile([file_path,data_Direct{data_d},filesep,data_Direct{data_d},'.nii.gz']);
cii_data = fullfile([file_path,data_Direct{data_d},filesep,data_Direct{data_d},'_Atlas.dtseries.nii']);
tsv_data = dir([file_path,data_Direct{data_d},filesep,'*.tsv']);
tsv_data = fullfile([file_path,data_Direct{data_d},filesep,tsv_data.name]);
if exist(nii_data,'file') && exist(cii_data,'file') && exist(tsv_data,'file')
fmriprep_path = [root_path,filesep,'fmriprep_rest2/sub-',sub_list{nsub},...
filesep,'func',filesep];
ant_path = [root_path,filesep,'fmriprep_rest2/sub-',sub_list{nsub},...
filesep,'anat',filesep];
if ~exist(fmriprep_path,'dir')
mkdir(fmriprep_path)
mkdir(ant_path)
end
tmp = regexp(data_Direct{data_d},'_','split');
nii_name = ['sub-',sub_list{nsub},'_task-',...
tmp{2},'_acq-',tmp{3},'_space-MNI152NLin2009cAsym_desc-preproc_bold'];
cii_name = ['sub-',sub_list{nsub},'_task-',...
tmp{2},'_acq-',tmp{3},'_space-fsLR_den-91k_bold'];
tsv_name = ['sub-',sub_list{nsub},'_task-',...
tmp{2},'_acq-',tmp{3},'_desc-confounds_timeseries'];
mni2t1 =['sub-',sub_list{nsub},'_from-MNI152NLin2009cAsym_to-T1w_mode-image_xfm.h5'];
t12mni =['sub-',sub_list{nsub},'_from-T1w_to-MNI152NLin2009cAsym_mode-image_xfm.h5'];
nii_cmd = ['jo -p RepetitionTime=0.72 SkullStripped=false '...
'TaskName=',tmp{2},' > ',fmriprep_path,...
nii_name,'.json'];
system(nii_cmd);
c_json = ['jo -p RepetitionTime=0.72 SkullStripped=false '...
'TaskName=',tmp{2},' > ',fmriprep_path,...
cii_name,'.json'];
system(c_json);
cii_cmd = ['jo -p RepetitionTime=0.72 ',...
'grayordinates=91k ',...
'space="HCP grayordinates" ',...
'surface=fsLR ',...
'surface_density=32k ',...
'volume=MNI152NLin6Asym ',...
'TaskName=',tmp{2},' > ',fmriprep_path,...
cii_name,'.dtseries.json'];
system(cii_cmd);
tsv_cmd = ['jo -p LR="1 2 3"'...
,' > ',fmriprep_path,...
tsv_name,'.json'];
system(tsv_cmd);
copyfile(nii_data,[fmriprep_path,nii_name,'.nii.gz']);
copyfile(cii_data,[fmriprep_path,cii_name,'.dtseries.nii']);
copyfile(tsv_data,[fmriprep_path,tsv_name,'.tsv']);
copyfile(anat_file_fake,[ant_path,mni2t1]);
copyfile(anat_file_fake,[ant_path,t12mni]);
disp(['sub-',sub_list{nsub},' already done'])
else
disp([nii_data,' does not exist'])
%out_sub{nsub} = sub_list{nsub};
end
end
end
end
delete(MyPar)
The output should look like the picture below:
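One caveat not covered by the scripts above: BIDS tools generally expect a dataset_description.json at the root of a derivatives dataset, and some xcp_abcd versions refuse to run without one. If you hit that error, a minimal sketch with jo (the Name value is arbitrary) is:

jo -p Name="fMRIPrep-style HCP derivatives" BIDSVersion="1.4.0" DatasetType="derivative" > /HCP_data_dir/fmriprep_rest2/dataset_description.json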
Step 3: Post-process the HCP_ALL_Family dataset
Now, we can use xcp_abcd to post-process the rs-fMRI data of the HCP_ALL_Family dataset:
#!/bin/bash
#SBATCH --nodes=1 # OpenMP requires a single node
#SBATCH -p q_cn
#SBATCH --ntasks=1 # Run a single serial task
#SBATCH --cpus-per-task=5
#SBATCH --mem-per-cpu=8gb
#SBATCH --time=80:00:00 # Time limit hh:mm:ss
#SBATCH --mail-user [email protected] #send the status of this job
##### END OF JOB DEFINITION #####
module load xcp_abcd/228 # assumed to set $IMG, the path to the xcp_abcd singularity image
singularity exec -B /HCP_data_dir/HCP_data/fmriprep_rest1_NMSM/:/fmriprep \
    -B /HCP_data_dir/rest1/:/out \
    -B /home/.cache:/home/xcp_abcd/.cache \
    -B /HCP_data_dir/rest1/:/work \
    $IMG xcp_abcd /fmriprep /out participant \
    -w /work --participant_label $1 --cifti -p 36P --despike --lower-bpf 0.01 --upper-bpf 0.08 --smoothing 6
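The participant label is passed in as the script's first argument ($1), so a single-subject submission might look like this (assuming the script above is saved as xcp_abcd_post.sh):

sbatch xcp_abcd_post.sh 100206

You can then loop sbatch over your subject list to post-process the whole dataset.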