ANTs Registration - Quantitative-Physiological-Imaging-Lab/documentation GitHub Wiki

A walkthrough of an example ANTs registration, with options explained in detail:

 antsRegistration -d 3 \
                  --float 1 \
                  --verbose 1 \
                  -u 1 \
                  -w [ 0.01,0.99 ] \
                  -z 1 \
                  -r [ $fixed,$moving ] \
                  -t Rigid[ 0.1 ] \
                  -m MI[ $fixed,$moving,1,32,Regular,0.25 ] \
                  -c [ 1000x500x250x0,1e-6,10 ] \
                  -f 6x4x2x1 \
                  -s 4x2x1x0 \
                  -t Affine[ 0.1 ] \
                  -m MI[ $fixed,$moving,1,32,Regular,0.25 ] \
                  -c [ 1000x500x250x0,1e-6,10 ] \
                  -f 6x4x2x1 \
                  -s 4x2x1x0 \
                  -t SyN[ 0.1,3,0 ] \
                  -m CC[ $fixed,$moving,1,4 ] \
                  -c [ 100x100x70x20,1e-9,10 ] \
                  -f 6x4x2x1 \
                  -s 3x2x1x0 \
                  -o $output

Introduction

Registration (sometimes called "normalization") brings one image into alignment with another, such that the same voxel refers roughly to the same structure in both brains. Often the fixed (or target) image is a template, but you can register any two images with each other; from a computational perspective there is nothing special about a template. [1]

ANTs is a user-level registration application built on classes in ITK v4.0 and later. The user can specify any number of "stages", where a stage consists of:

  • a transform,
  • an image metric,
  • iterations,
  • shrink factors,
  • and smoothing sigmas for each level.

Note that the dimensionality, metric, transform, output, convergence, shrink-factors, and smoothing-sigmas parameters are mandatory. In general, a 0 after a parameter means it is turned off, while a 1 means it is turned on. Words starting with $ are shell variables defined beforehand.

Registration is an iterative process: the moving image is transformed and compared, via a similarity measure (cross-correlation (CC) or mutual information (MI)), to the fixed (target) image.
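
As a toy illustration of what a similarity measure computes, the sketch below scores two flattened intensity arrays with a plain global Pearson correlation. This is not ANTs' implementation: ANTs' CC is a local neighborhood correlation and its MI is histogram-based; the point is only that a well-aligned pair scores high and a misaligned pair scores low.

```python
# Illustrative only: a global Pearson correlation between two flat
# intensity arrays, standing in for the similarity metrics ANTs optimizes.
from math import sqrt

def correlation(fixed, moving):
    n = len(fixed)
    mf = sum(fixed) / n
    mm = sum(moving) / n
    cov = sum((f - mf) * (m - mm) for f, m in zip(fixed, moving))
    var_f = sum((f - mf) ** 2 for f in fixed)
    var_m = sum((m - mm) ** 2 for m in moving)
    return cov / sqrt(var_f * var_m)

fixed = [0, 1, 2, 3, 4]
aligned = [0, 2, 4, 6, 8]      # same structure, different intensity scale
misaligned = [4, 3, 2, 1, 0]   # structure reversed
print(correlation(fixed, aligned))     # 1.0
print(correlation(fixed, misaligned))  # -1.0
```

The optimizer's job at every iteration is to nudge the transform so that this kind of score improves.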

Registration algorithms:

To achieve optimal registration, a set of algorithms are used in ANTs: rigid, then affine, then SyN.

A "rigid" registration does not deform or scale the brain; it only rotates and translates it (the whole image is treated as a rigid object).

An "affine" registration allows not only rotation and translation, but also shearing and scaling. This further allows brains to be matched in size.

The "rigid" and "affine" registrations are also called linear registrations, because every point of the image moves according to the same global transform. Linear registration helps only superficially; gyri and sulci will still be misaligned.

To properly register individual gyri and sulci we need non-linear registration. There are several non-linear algorithms in ANTs, but we typically use SyN, one of the top-performing algorithms (see Klein 2009).

Inline code explained

-d or --dimensionality

antsRegistration -d 3 \

This option forces the image to be treated as an image of the specified dimensionality. If not specified, the dimensionality is read from the input image.

--float

--float 1 \

Use single-precision 'float' (--float 1) instead of 'double' (--float 0) for computations, which reduces memory use at some cost in precision.

-v or --verbose

--verbose 1 \

With verbose on, diagnostic info and any error messages will be displayed in the terminal, e.g. the iteration and convergence of each step, or why an error occurred. By default verbose is turned off.

verbose on = --verbose 1

verbose off = --verbose 0

-u or --use-histogram-matching

-u 1 \

Histogram matching is a pre-processing step that transforms the input intensities so that the histograms of the moving and target images match as closely as possible. It is designed to make registration work better and is independent of the alignment of the two images.

Set it to 0 when registering across modalities (e.g. T1 to T2) and to 1 when registering within a modality.
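
The idea can be sketched in a few lines of Python as a rank-based remapping. This is only an illustration, not ITK's actual filter (which matches a fixed number of histogram levels at selected quantiles): each moving-image intensity is replaced by the fixed-image intensity of the same rank, so the two intensity distributions coincide while the spatial layout is untouched.

```python
# Illustrative rank-based histogram matching (assumes equal voxel counts).
def match_histogram(moving, fixed):
    order = sorted(range(len(moving)), key=lambda i: moving[i])  # voxel ranks
    target = sorted(fixed)                    # fixed-image values to borrow
    out = [0] * len(moving)
    for rank, idx in enumerate(order):
        out[idx] = target[rank]               # same rank -> same intensity
    return out

moving = [10, 200, 50, 90]   # e.g. an image on a different intensity scale
fixed = [1, 2, 3, 4]
print(match_histogram(moving, fixed))  # [1, 4, 2, 3]
```

After the remapping, the output's sorted intensities are exactly those of the fixed image, which is why the step helps intensity-based metrics within a modality but can be harmful across modalities, where the same structure legitimately has different intensities.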

-w or --winsorize-image-intensities

-w [ 0.01,0.99 ] \

-w [lowerQuantile,upperQuantile]

Winsorize the data based on the specified quantiles to deal with outlier voxels. With [ 0.01,0.99 ], intensities below the 1st percentile and above the 99th percentile are clipped to those percentile values, so the effective intensity range covers the 1st to 99th percentiles.

The range can be restricted further for bad images, but be careful: it may destroy contrast. Winsorizing helps because a few voxels with extreme values (e.g. artefacts) can badly affect the registration.
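
A minimal sketch of the clipping step (the exact quantile convention used here is an assumption for illustration, not ANTs' implementation):

```python
# Illustrative winsorizing: clip every intensity to the values found at the
# lower and upper quantiles, so a few extreme voxels cannot dominate.
def winsorize(values, lower_q=0.01, upper_q=0.99):
    ranked = sorted(values)
    n = len(ranked)
    lo = ranked[int(lower_q * (n - 1))]   # value at the lower quantile
    hi = ranked[int(upper_q * (n - 1))]   # value at the upper quantile
    return [min(max(v, lo), hi) for v in values]

image = list(range(101)) + [10000]   # one artefact voxel far above the rest
clipped = winsorize(image)
print(max(clipped))  # the artefact is pulled down to the 99th-percentile value
```

Note that the bulk of the intensities is untouched; only the tails are flattened onto the quantile values.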

-z or --collapse-output-transforms

 -z 1 \

Boolean option, by default set to true (1): collapse the output transforms. Specifically, enabling this option combines all adjacent transforms where possible. All adjacent linear transforms (e.g. rigid and affine) are written to disk as a single file in the ITK affine transform format (called xxxGenericAffine.mat).

Similarly, all adjacent displacement field transforms are combined when written to disk (e.g. xxxWarp.nii.gz and xxxInverseWarp.nii.gz, if available). Also, an output composite transform including the collapsed transforms is written to disk (called outputCollapsed(Inverse)Composite).
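
Why adjacent linear transforms can be collapsed into a single file: applying one linear transform after another is the same as applying their single matrix product. A small 2-D sketch with made-up matrices (illustrative only; the real transforms are 3-D and read from disk):

```python
# Composing a rigid and an affine transform into one matrix gives
# point-for-point identical results, so only one matrix needs storing.
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(t, p):
    x, y = p
    v = [x, y, 1]                      # homogeneous 2-D coordinates
    return (sum(t[0][k] * v[k] for k in range(3)),
            sum(t[1][k] * v[k] for k in range(3)))

rigid = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]    # 90-degree rotation
affine = [[2, 0, 5], [0, 2, 0], [0, 0, 1]]    # scaling plus translation
collapsed = matmul(affine, rigid)              # the single combined transform

p = (3, 4)
print(apply(affine, apply(rigid, p)))  # (-3, 6)
print(apply(collapsed, p))             # (-3, 6), identical
```

Displacement fields compose the same way in principle, which is why adjacent warps can likewise be merged into one xxxWarp.nii.gz.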

-r or --initial-moving-transform

-r [ $fixed,$moving ] \

-r [initialTransform,useInverse] [fixedImage,movingImage,initializationFeature]

Registration works in real coordinates given by the scanner. So images can start quite far from each other (per analogy, one in New York, one in London). An initial move is required to bring the images roughly in the same space (close to each other). Options are:

0 - match by mid-point (i.e., center voxel of one image will be brought in line with center voxel of the other)

1 - match by center of mass

2 - match by point of origin (i.e. coordinates 0,0,0)

The order of the transforms is stack-like, in that the last transform specified on the command line is the first to be applied. [initialTransform,] is optional. One can point to a .mat file obtained with antsAI, but there is usually no need. The antsAI approach is used in antsCorticalThickness.sh and runs an affine with several random initializations to check whether an "unusual" solution is better. This is useful when there are strong orientation issues (e.g., the moving image is flipped inferior-superior).
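
The center-of-mass option (initializationFeature 1) can be sketched as follows. This toy version represents an image as a sparse mapping from voxel coordinates to intensity, an illustrative simplification of a real volume:

```python
# Illustrative center-of-mass initialization: compute each image's
# intensity-weighted centroid; their difference is the initial translation
# that brings the two images roughly on top of each other.
def center_of_mass(image):
    # image: dict mapping (x, y, z) voxel coordinates to intensity
    total = sum(image.values())
    return tuple(sum(c[i] * v for c, v in image.items()) / total
                 for i in range(3))

fixed = {(10, 10, 10): 1.0, (12, 10, 10): 1.0}
moving = {(50, 50, 50): 1.0, (52, 50, 50): 1.0}

shift = tuple(f - m for f, m in zip(center_of_mass(fixed),
                                    center_of_mass(moving)))
print(shift)  # (-40.0, -40.0, -40.0)
```

Option 0 works the same way but uses the geometric image centers instead of the intensity-weighted ones, and option 2 simply aligns the coordinate origins.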


Transformations

Each transformation needs -t, -m, -c, -f, and -s flags before the next transformation starts.

Besides going through a sequence of registration algorithms, ANTs improves the registration within each algorithm gradually at different resolutions, called levels. The idea is to start with "blurry" images (low resolution, heavily smoothed), register those to each other, then move to the next level with a sharper, higher-resolution version, and so on. This is why you see strings like 1000x500x250x100 in registration calls: it means there will be 4 levels, and the numbers give the maximum iterations for each level. The above antsRegistration call runs a rigid, an affine, and a SyN registration. It is the exact call made by the script antsRegistrationSyN.sh. The following is a line-by-line commentary on what each part of the call does.
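
The level scheme can be sketched as a loop over shrink factors and iteration budgets. This is a heavily simplified 1-D sketch (real ANTs levels also smooth before downsampling and carry the estimated transform from one level to the next):

```python
# Illustrative multi-resolution pyramid: each level works on a coarser,
# block-averaged version of the data and has its own iteration budget.
def shrink(signal, factor):
    # crude downsampling: average non-overlapping blocks of `factor` samples
    return [sum(signal[i:i + factor]) / len(signal[i:i + factor])
            for i in range(0, len(signal), factor)]

levels = list(zip([6, 4, 2, 1],          # shrink factors   (-f 6x4x2x1)
                  [1000, 500, 250, 0]))  # iteration budget (-c 1000x500x250x0)

signal = list(range(24))
for factor, iterations in levels:
    coarse = shrink(signal, factor)
    print(f"factor {factor}: {len(coarse)} samples, up to {iterations} iterations")
```

Each level's result initializes the next, so by the time the full-resolution level is reached, only fine corrections remain.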

-t or --transform

-t Rigid[ 0.1 ] \

ANTs offers 14 transform options in total. All of them take a gradientStep as their first parameter; any further parameters are transform-specific.

  1. Rigid[gradientStep]

    gradientStep: The gradientStep expresses how large the shifts will be, in other words how much each point can move after each iteration. Since many iterations are run, small steps can be chosen to approach the best solution gradually rather than in big jumps. If the gradientStep is too big, registration finishes quickly but the result may be suboptimal; if it is too small, convergence takes longer (more iterations). After each iteration a gradient field is computed, which indicates how each point (or voxel) shifts in space. This small deformation (the "update" field) is combined with the previous updates to form the "total" deformation field. In general, optimal values for the gradientStep are 0.1-0.25.

  2. Affine[gradientStep]

    gradientStep: as for Rigid above.

  3. CompositeAffine[gradientStep]

    gradientStep: as above.

  4. Similarity[gradientStep]

    gradientStep: as above.

  5. Translation[gradientStep]

    gradientStep: as above.

  6. BSpline[gradientStep,meshSizeAtBaseLevel]

    gradientStep: as above. For non-linear transforms (e.g. SyN) the shift of each point is computed separately, so large values may also increase high-frequency deformations (each point going its own way). Because each point can follow its own path, unrealistic deformations can occur, which may make images look torn apart. To avoid this, a small penalty is added so that shifts are not considered independently at each point.

    meshSizeAtBaseLevel:

  7. GaussianDisplacementField[gradientStep,updateFieldVarianceInVoxelSpace,totalFieldVarianceInVoxelSpace]

    gradientStep: as above.

    updateFieldVarianceInVoxelSpace: By adding a penalty here, we smooth the deformation computed on the "update" field before it is added to the previous deformations to form the "total" field. For each point, the deformation of its neighbors is thus taken into account as well, which prevents points from moving too independently at each iteration (e.g., a point cannot move 2 voxels in one direction if all its neighbors are moving 0.1 voxels in the other direction).

    totalFieldVarianceInVoxelSpace: By adding a penalty here, we smooth the deformation computed on the "total" field, i.e. all deformations accumulated from the beginning (this and all previous iterations). In principle, smoothing of the update field can be viewed as fluid-like registration, whereas smoothing of the total field can be viewed as elastic registration.

  8. BSplineDisplacementField[gradientStep,updateFieldMeshSizeAtBaseLevel, totalFieldMeshSizeAtBaseLevel=0, splineOrder=3]

    gradientStep: as above.

    updateFieldMeshSizeAtBaseLevel:

    totalFieldMeshSizeAtBaseLevel:

    splineOrder:

  9. TimeVaryingVelocityField[gradientStep, numberOfTimeIndices, updateFieldVarianceInVoxelSpace, updateFieldTimeVariance, totalFieldVarianceInVoxelSpace, totalFieldTimeVariance]

    gradientStep: as above.

    numberOfTimeIndices:

    updateFieldVarianceInVoxelSpace: as above.

    updateFieldTimeVariance:

    totalFieldVarianceInVoxelSpace: as above.

    totalFieldTimeVariance:

  10. TimeVaryingBSplineVelocityField[gradientStep, velocityFieldMeshSize, numberOfTimePointSamples=4, splineOrder=3]

    gradientStep: as above.

    velocityFieldMeshSize:

    numberOfTimePointSamples:

    splineOrder:

  11. SyN[gradientStep, updateFieldVarianceInVoxelSpace=3, totalFieldVarianceInVoxelSpace=0]

    gradientStep: as above.

    updateFieldVarianceInVoxelSpace: as above.

    totalFieldVarianceInVoxelSpace: as above.

  12. BSplineSyN[gradientStep,updateFieldMeshSizeAtBaseLevel, totalFieldMeshSizeAtBaseLevel=0, splineOrder=3]

    gradientStep: as above.

    updateFieldMeshSizeAtBaseLevel:

    totalFieldMeshSizeAtBaseLevel:

    splineOrder:

  13. Exponential[gradientStep,updateFieldVarianceInVoxelSpace,velocityFieldVarianceInVoxelSpace, numberOfIntegrationSteps]

    gradientStep: as above.

    updateFieldVarianceInVoxelSpace: as above.

    velocityFieldVarianceInVoxelSpace:

    numberOfIntegrationSteps:

  14. BSplineExponential[gradientStep,updateFieldMeshSizeAtBaseLevel, velocityFieldMeshSizeAtBaseLevel=0, numberOfIntegrationSteps, splineOrder=3]

    gradientStep: as above.

    updateFieldMeshSizeAtBaseLevel:

    velocityFieldMeshSizeAtBaseLevel:

    numberOfIntegrationSteps:

    splineOrder:

The gradientStep (or learningRate) characterizes the gradient descent optimization and is scaled appropriately for each transform using the shift scales estimator. Subsequent parameters are transform-specific and can be determined from the usage. For the B-spline transforms one can also specify the smoothing in terms of spline distance (i.e. knot spacing).
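
The gradientStep trade-off can be demonstrated on a toy 1-D objective with plain gradient descent (not the scaled optimizer ANTs actually uses): a small step converges slowly within a fixed iteration budget, a moderate step converges quickly, and an overly large step overshoots and diverges.

```python
# Illustrative gradient descent on f(x) = (x - 5)^2, whose minimum is at 5.
def descend(step, iterations=50, start=0.0):
    x = start
    for _ in range(iterations):
        x -= step * 2 * (x - 5)   # gradient of (x - 5)^2 is 2(x - 5)
    return x

print(round(descend(0.1), 3))   # lands essentially on the optimum at 5
print(round(descend(0.01), 3))  # still short of 5 after the same budget
print(abs(descend(1.5)) > 1e6)  # True: too-large steps blow up
```

This is the intuition behind the 0.1-0.25 recommendation: big enough to make progress within the iteration budget, small enough not to overshoot.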

-m or --metric

-m MI[ $fixed,$moving,1,32,Regular,0.25 ] \

-m MI[fixedImage,movingImage,metricWeight,numberOfBins,samplingStrategy,samplingPercentage]

The metric measures how similar the two images are under the current transform; it is the quantity the optimizer improves. Here mutual information (MI) is used with a metric weight of 1, 32 histogram bins, and Regular sampling of 25% of the voxels.

-c or --convergence

-c [ 1000x500x250x0,1e-6,10 ] \

-c [MxNx...,convergenceThreshold,convergenceWindowSize]

The maximum number of iterations for each level, the convergence threshold, and the window size over which convergence is checked: a level stops early once the metric changes by less than the threshold over the last 10 iterations.

-f or --shrink-factors

-f 6x4x2x1 \

The shrink factor for each level: the images are downsampled by this factor, so the first level works on a 6-times-coarser version of the images and the last level on full resolution.

-s or --smoothing-sigmas

-s 4x2x1x0 \

The Gaussian smoothing sigma applied to the images at each level (a vox or mm suffix selects the units). Coarse levels are smoothed heavily; the final level is not smoothed at all.

So in summary

-t Rigid[ 0.1 ] \
-m MI[$fixed,$moving,1,32,Regular,0.25 ] \
-c [ 1000x500x250x0,1e-6,10 ] \
-f 6x4x2x1 \
-s 4x2x1x0 \

means that a rigid transformation with a gradient step of 0.1 will be performed, using mutual information (MI) as the cost function. The rigid stage itself consists of 4 levels:

In the first level, at most 1000 iterations are performed, or fewer if the metric changes by less than 1e-6 over the last 10 iterations. The images are heavily downsampled (shrink factor 6, e.g. 1x1x1mm voxels become 6x6x6mm) and additionally smoothed with a Gaussian kernel of sigma 4. This results in a very blurry image; the step produces a coarse initial alignment of the moving and fixed images.

In the second level the images are slightly less blurry: at most 500 iterations are performed, with the same convergence criterion. The resolution is now 4 times lower than that of the original input images, and the smoothing sigma is reduced to 2.

The third level has at most 250 iterations, a resolution 2 times lower than the input images, and a smoothing sigma of 1.

The fourth and final level is effectively skipped, since its iteration count was set to 0: the images are at full resolution and unsmoothed, but no optimization is performed there.
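
The convergence behaviour described above can be sketched as follows (an assumed simplification of ANTs' windowed convergence test):

```python
# Illustrative windowed convergence: stop a level once the metric has changed
# by less than `threshold` over the last `window` iterations, otherwise run
# the full iteration budget; a 0-iteration level performs no optimization.
def run_level(metric_values, max_iterations, threshold=1e-6, window=10):
    history = []
    for it in range(min(max_iterations, len(metric_values))):
        history.append(metric_values[it])
        if len(history) > window and \
           abs(history[-1] - history[-1 - window]) < threshold:
            return it + 1          # converged before the iteration budget
    return min(max_iterations, len(metric_values))

# A toy metric that stops improving after iteration 30:
values = [1.0 / (i + 1) if i < 30 else 1.0 / 31 for i in range(1000)]
print(run_level(values, 1000))  # stops shortly after iteration 30
print(run_level(values, 0))     # a 0-iteration level does nothing
```

This is why a level with a generous budget like 1000 iterations often finishes much earlier, and why a 0 entry in the schedule disables optimization at that level entirely.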

The affine and SyN stages that follow work the same way; the SyN stage switches the metric to cross-correlation (CC) with a neighborhood radius of 4 and uses its own iteration schedule:

                  -t Affine[ 0.1 ] \
                  -m MI[ $fixed,$moving,1,32,Regular,0.25 ] \
                  -c [ 1000x500x250x0,1e-6,10 ] \
                  -f 6x4x2x1 \
                  -s 4x2x1x0 \
                  -t SyN[ 0.1,3,0 ] \
                  -m CC[ $fixed,$moving,1,4 ] \
                  -c [ 100x100x70x20,1e-9,10 ] \
                  -f 6x4x2x1 \
                  -s 3x2x1x0 \
                  -o $output

[1] https://github.com/ANTsX/ANTs/wiki/Anatomy-of-an-antsRegistration-call
