
FreeSurfer

FreeSurfer (FS) is an open-source neuroimaging toolkit for processing, analyzing, and visualizing human brain MR images. FS's main command, recon-all, can take several hours per scan, so the ability to batch process many files in parallel on a cluster, such as ARC, is essential for analyzing neuroimaging data in a timely manner.

In this tutorial, we will teach you three processing recipes:

  1. Batch process files using the recon-all command;

  2. Generate a spreadsheet with volumetric measurements from the files you processed using recon-all;

  3. Convert FS's segmentation masks back to native space and NIfTI file format.

Batch processing using the recon-all command

In our GitHub repository, we have a SLURM script named freesurfer_nifti.slurm that handles part of the procedure, such as setting the paths and the proper FS commands that you want to run.

The key step for batch processing is creating a text file in which each line contains the path of one file that we want to process. In this tutorial, we assume that you have already copied all the files you want to process to the cluster and are connected to ARC. Let's look at how to do this through the following example of a folder and file structure.

In the folder /work/harris_lab/roberto/Scans we have three subfolders (GE, Philips, Siemens) that contain multiple NIfTI files to be processed. The commands to create the text file we need are the following:

foo@bar:~$ touch subjects_list.txt

foo@bar:~$ ls /work/harris_lab/roberto/Scans/*/*.nii.gz > subjects_list.txt

The touch command creates an empty text file named subjects_list.txt in the current folder. Then, the ls command lists all the matching files in the subfolders and redirects its output (>) into the subjects_list.txt file that we created (see the result below).
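With the example structure above, subjects_list.txt ends up with one absolute file path per line, something like this (the filenames here are illustrative):

/work/harris_lab/roberto/Scans/GE/scan01.nii.gz

/work/harris_lab/roberto/Scans/GE/scan02.nii.gz

/work/harris_lab/roberto/Scans/Philips/scan03.nii.gz

...

/work/harris_lab/roberto/Scans/Siemens/scan12.nii.gz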

Now, all we need to do is submit the job to the cluster. Go to the path where the script is located and type into the terminal:

foo@bar:~$ sbatch --array=1-12 freesurfer_nifti.slurm

The sbatch command submits the job, and the --array option specifies which lines of subjects_list.txt to process; here, lines 1 through 12 are processed, with one array task launched per line.
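To make the link between array indices and files concrete, here is a minimal sketch of what a script like freesurfer_nifti.slurm can do. The module name, resource requests, and output path are assumptions for illustration; refer to the actual script in the repository:

#!/bin/bash
#SBATCH --time=24:00:00    # assumed walltime; adjust to your data
#SBATCH --mem=8G           # assumed memory request

# Load FreeSurfer and point it at an output directory (placeholder paths).
module load freesurfer
export SUBJECTS_DIR=/work/harris_lab/roberto/fs_output

# Grab the input file on the line of subjects_list.txt matching this task's index.
INPUT=$(sed -n "${SLURM_ARRAY_TASK_ID}p" subjects_list.txt)
SUBJECT=$(basename "$INPUT" .nii.gz)

# Run the full recon-all pipeline on this scan.
recon-all -i "$INPUT" -s "$SUBJECT" -all

Because each array task reads a different line, --array=1-12 launches twelve independent recon-all runs, one per file. Subsets also work, e.g., sbatch --array=3-5 freesurfer_nifti.slurm or sbatch --array=1,4,9 freesurfer_nifti.slurm.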

Generate a spreadsheet with volumetric measurements

After processing all the files with recon-all, it is time to extract measurements from them, such as volumes and cortical thicknesses. In our GitHub repository, we have a SLURM script named extract_aseg_stats.slurm that handles part of the procedure, such as setting the paths and the proper FS commands that extract the measurements you want.

Now, all we need to do is submit the job to the cluster. Go to the path where the script is located and type into the terminal:

foo@bar:~$ sbatch extract_aseg_stats.slurm
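Under the hood, the standard FS tools for this step are asegstats2table (subcortical volumes) and aparcstats2table (cortical measures). A minimal sketch, assuming SUBJECTS_DIR points at your recon-all output and using placeholder subject IDs:

export SUBJECTS_DIR=/work/harris_lab/roberto/fs_output

# Subcortical volumes, one row per subject.
asegstats2table --subjects subj01 subj02 subj03 --meas volume --tablefile aseg_volumes.txt

# Left-hemisphere cortical thickness per region; repeat with --hemi rh for the right side.
aparcstats2table --subjects subj01 subj02 subj03 --hemi lh --meas thickness --tablefile lh_thickness.txt

Both tools write tab-delimited tables by default; pass --delimiter=comma to get a CSV that opens directly in a spreadsheet program.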

Convert FS's segmentation masks back to native space and NIfTI file format

Another useful recipe is converting FS's results back to a friendlier file format, such as NIfTI (FS uses its own .mgz format), and back to the native space of the input image (recon-all resamples the input image into FS's own conformed space at the start of processing).

In our GitHub repository, we have a SLURM script named back2native_nifti.slurm that handles part of the procedure: setting the paths, selecting the files you want to convert back to native space and the target file format, and running the proper FS commands.

Now, all we need to do is submit the job to the cluster. Go to the path where the script is located and type into the terminal:

foo@bar:~$ sbatch back2native_nifti.slurm
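For reference, a single mri_convert call can handle both steps, resampling a segmentation onto the original voxel grid and writing NIfTI output. A sketch with placeholder paths and subject ID:

export SUBJECTS_DIR=/work/harris_lab/roberto/fs_output

# Reslice aseg.mgz onto the native-space rawavg.mgz grid; nearest-neighbour
# resampling (-rt nearest) keeps the integer label values intact.
mri_convert -rl $SUBJECTS_DIR/subj01/mri/rawavg.mgz -rt nearest \
    $SUBJECTS_DIR/subj01/mri/aseg.mgz aseg_subj01_native.nii.gz

The .nii.gz extension tells mri_convert to write NIfTI output.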

Monitoring and managing the jobs

To monitor the status of your jobs, use the squeue command:

foo@bar:~$ squeue -u roberto.medeirosdeso

Specifying your username makes squeue list only the jobs that you submitted.
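The output looks roughly like the following (the job IDs, partition, and job names here are illustrative). Running array tasks appear as JOBID_INDEX, while pending tasks are grouped into a single JOBID_[range] row:

JOBID        PARTITION  NAME      USER                  ST  TIME  NODES  NODELIST(REASON)
12345_1      cpu2019    fs_nifti  roberto.medeirosdeso  R   1:02  1      node01
12345_2      cpu2019    fs_nifti  roberto.medeirosdeso  R   1:02  1      node02
12345_[3-12] cpu2019    fs_nifti  roberto.medeirosdeso  PD  0:00  1      (Resources)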

For managing jobs, you can use the scancel command:

foo@bar:~$ scancel JOB_ID
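Here, JOB_ID is the numeric ID reported by squeue. For array jobs, you can cancel the whole array at once or just a single task:

foo@bar:~$ scancel 12345

foo@bar:~$ scancel 12345_7

The first command cancels every task of array job 12345; the second cancels only task 7.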