Proposals for new features - SyneRBI/SIRF GitHub Wiki

This page provides links to our current proposals for new features of SIRF so that they can be discussed by the community. Not all proposed features are listed here; check also our issues and projects pages.

Image Geometry

Ashley Gillman, Richard Brown, David Atkinson, Ben Thomas, Johannes Mayer, Evgueni Ovtchinnikov, Casper da Costa-Luis, Kris Thielemans

Please add any comments or questions to https://github.com/SyneRBI/SIRF/issues/198 rather than editing directly here.

Purpose

The aim of this project is to have SIRF "aware" of image geometry. In essence, this means that a SIRF image can identify the position of any given voxel in DICOM LPS space. More broadly, this will allow:

  • Resampling of MR and PET data between spaces, thereby allowing propagation of information from one to another in synergistic reconstructions.
  • Reconstruction in the same space as vendor reconstructions, allowing direct comparisons.
  • Use of information from other image processing on vendor or other-modality reconstructions. For example, motion estimates can be obtained via registration of vendor or Gadgetron MR reconstructions and applied in PET reconstruction.

The initial plan was to implement additional functionality in SIRF, which would wrap existing STIR and Gadgetron facilities for image geometry. Initial work on wrapping STIR in this way has been conducted in https://github.com/SyneRBI/SIRF/pull/193.

Coordinate systems

Physical Coordinate Systems

Two important aspects define a physical coordinate system:

  1. The FRAME-OF-REFERENCE, a point which will remain stationary and is defined in image space as (0, 0, 0).
    • For most MRI, and mMR, this can be calibrated and arbitrarily set, but is usually set to the magnet iso-center.
    • This can be stationary with respect to:
      • Scanner/Gantry (PET and MR acquisition raw data is like this),
      • Bed (the image moves with bed shifts),
      • Patient (the image directions are defined w.r.t. patients left/right, posterior/anterior, superior/inferior). NB: This usually doesn't mean stationary with patient motion, but rather their approximate positioning (to nearest 90 degrees) at the beginning of the scan.
  2. The AXES-DIRECTIONS.
    • Which direction each axis points. Note that this may be slightly different between the MR and PET gantries.

It should be noted that a physical coordinate system is purely notional. Although images, by convention, represent their geometry with respect to one coordinate system or another, they could potentially represent their geometry in any physical coordinate system.

In fact, in the next section we propose that STIR be able to represent its images in two coordinate systems: an internal bed-based one and an external patient-based (DICOM LPS) one.

(Proposed) Physical Coordinate Systems in SIRF

There are a number of coordinate systems used within the SIRF ecosystem (a small worked example relating two of them follows this list):

  • SIRF itself will use a patient-based coordinate system, as per the DICOM standard.
    • The frame of reference is:
      • Patient-based
      • (0, 0, 0) is not defined by SIRF itself, but is the scanner's designated origin.
    • The axes directions for axes 0, 1 and 2 are left, posterior and superior, respectively, with respect to the patient's position in the scanner. (So a patient scanned head-first and subsequently feet-first will be roughly aligned.)
  • STIR
    • Images - DICOM-LPS-interface
      • reference: Patient
      • origin: vendor-origin
      • axes-directions: LPS
    • Images - physical-coord-interface
      • frame-of-reference: Bed, (0, 0, 0) = vendor-origin
      • axes-directions: (x, y, z) = more right, more up, more into the gantry, as you face the scanner from the bed side.
    • Images (as of the 2018-08-09 master)
      • frame-of-reference: Gantry, (0, 0, 0) = middle of the first ring
      • axes-directions: (x, y, z) = more right, more up, more into the gantry, as you face the scanner from the bed side.
    • Projection Data
      • frame-of-reference: Gantry, (0, 0, 0) = Middle of the gantry. Also, middle of the first ring for some deprecated functions.
      • axes-directions: (x, y, z) = more right, more up, more into the gantry, as you face the scanner from the bed side.
  • Gadgetron:
    • TBC
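
As an illustration of how two of the conventions above relate, here is a minimal numpy sketch, for illustration only, assuming a head-first supine patient and a purely hypothetical offset between the internal and vendor origins (none of these names are part of the SIRF API):

import numpy as np

# Assumption: head-first supine patient. With the internal axes defined as above
# (x = more right, y = more up, z = more into the gantry, viewed from the bed side),
# the patient's left is -x, posterior is -y and superior is +z, so the direction
# part of the internal-to-LPS mapping is:
internal_to_lps_direction = np.diag([-1.0, -1.0, 1.0])

# Hypothetical offset (mm) between the internal origin and the vendor origin
origin_shift = np.array([0.0, 0.0, -150.0])

def lps_from_internal(p_internal):
    """Express an internal point in DICOM LPS (head-first supine case only)."""
    return internal_to_lps_direction @ (np.asarray(p_internal) - origin_shift)

print(lps_from_internal([10.0, 20.0, 30.0]))   # left/posterior flip: (-10, -20, 180)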

Index to Physical Coordinate Mapping

Note that the physical coordinate system is never "coded". It is rather an agreed convention.

However, the transform for each image between the voxel indices and the conventional coordinate system is coded. It is proposed to use the ITK convention to encode these:

  • origin[3]: the location of the centre of the first voxel.
  • spacing[3]: the distance between voxels in each direction.
  • direction[3][3]: each column encodes a unit-vector representing the axis direction.

From these values, a 4x4 matrix (like NIfTI's mat44) can be calculated. Concatenating a '1' to an index, multiplying by this matrix, and truncating to the first 3 values of the resulting 4-vector gives the physical coordinate of that index.
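
As a worked example, a minimal numpy sketch of this construction (the values and variable names are illustrative only, not part of any SIRF API):

import numpy as np

# Example ITK-style geometry (hypothetical values)
origin    = np.array([-127.5, -127.5, -80.0])   # centre of the first voxel (mm)
spacing   = np.array([1.0, 1.0, 2.0])           # voxel spacing (mm)
direction = np.eye(3)                           # columns are the unit axis directions

# Build the 4x4 index-to-physical matrix (cf. NIfTI's mat44)
affine = np.eye(4)
affine[:3, :3] = direction @ np.diag(spacing)
affine[:3, 3]  = origin

# Concatenate a '1' to the index, multiply, keep the first 3 values
index = np.array([10, 20, 5])
physical_point = (affine @ np.append(index, 1.0))[:3]
print(physical_point)   # -> (-117.5, -107.5, -70.0)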

SIRF Changes

NB: Any code in this document is pseudocode and not literal.

Ongoing discussion: https://github.com/SyneRBI/SIRF/issues/198

See also: https://github.com/SyneRBI/SIRF/pull/193

GeometricalInfo

A class, GeometricalInfo, is a container for the geometrical information. Each image class will eventually hold a reference to one, although currently only PETImageData does. Taking inspiration from STIR's image hierarchy, GeometricalInfo itself will not assume a voxelised geometry.

template <int num_physical_dimensions, int num_index_dimensions>
class GeometricalInfo {
public:
	typedef tVector<num_physical_dimensions, float>     Coordinate;
	typedef tVector<num_index_dimensions, unsigned int> Index;
	virtual Coordinate transform_index_to_physical_point(Index idx);
	virtual Index transform_physical_point_to_index(Coordinate coord);
};

For standard images, VoxelisedGeometricalInfo will be implemented. Its interface takes inspiration from that of ITK:

  • VoxelisedGeometricalInfo::Origin - Location of centre of first voxel in physical coordinates.
  • VoxelisedGeometricalInfo::Spacing - Distance between voxels on the Cartesian grid.
  • VoxelisedGeometricalInfo::Size - Number of voxels in each dimension.
  • VoxelisedGeometricalInfo::Direction - 3x3 matrix where columns represent the unit direction of each axis in physical coordinates.

In order to move between spaces, a resampler will be required. Resampler classes can resample an image into the space defined by another image's GeometricalInfo. Various libraries could be wrapped in SIRF to perform this function; Richard's current work uses NiftyReg.
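
To make the resampling step concrete, here is a hedged sketch of grid-to-grid resampling driven purely by the two images' 4x4 index-to-physical matrices (scipy is used only for illustration; the actual SIRF resampler would wrap NiftyReg or a similar library):

import numpy as np
from scipy.ndimage import affine_transform

def resample_onto(moving_data, moving_affine, reference_affine, reference_shape):
    """Resample moving_data onto the voxel grid described by reference_affine.

    Both affines are 4x4 index-to-physical matrices as constructed above.
    """
    # Map each reference index to physical space, then into the moving image's index space
    combined = np.linalg.inv(moving_affine) @ reference_affine
    return affine_transform(moving_data,
                            matrix=combined[:3, :3],
                            offset=combined[:3, 3],
                            output_shape=reference_shape,
                            order=1)   # linear interpolation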

ImageData

A new abstract class, ImageData, will be inserted into the current hierarchy, in between aContainer and the current *ImageData classes. ImageData will implement:

  • Knowing about its GeometricalInfo
  • Converting itself into some "canonical" formats
    • this will tie in with the SIRFReg proposal, but other implementations are possible. A potential ITK implementation is shown below for demonstration purposes.

Pseudocode to demonstrate inheritance and functionality:

class ImageData : aContainer {
    // ImageData is abstract also
    virtual Data get_data();                               // get raw voxel data
    virtual void set_data(data);                           // set raw voxel data
    virtual ImageData clone();                             // copy image
    virtual GeomInfo get_patient_coord_geometrical_info(); // get geometrical info (assume voxelised)
#ifdef BUILD_NIFTYREG
    NiftyImageData to_nifty_image();                       // convert into Richard's format
    void fill_from_nifty_image(image) {                    // check this.geomInfo and nifty.geomInfo, resample, fill
        this_geom = this.get_patient_coord_geometrical_info();
        other_geom = image.get_patient_coord_geometrical_info();
        data = nifty::resample(image.get_data(), other_geom, this_geom);
        fill(data);
    }
#endif
#ifdef BUILD_ITK
    ItkImageData to_itk_image();       // Not for now, but demonstrating future opportunity
    void fill_from_itk_image(image);   // resample, fill
#endif
}

class PETImageData : ImageData { /*...*/ }
class MRImageData : ImageData { /*...*/ }

// maybe?
#ifdef BUILD_NIFTYREG
class NiftyImageData : ImageData { /*...*/ }
#endif

Interface (end-user usage in Python)

# Load a STIR-reconstructed PET and 2x vendor-reconstructed MRs
pet_image = sirf.STIR.ImageData('/path/to/pet')
ute_image = sirf.nifty.ImageData('/path/to/vendor-recon')
mprage_image = sirf.nifty.ImageData('/path/to/vendor-recon')

# register our structural MPRAGE to the already-aligned UTE
motion_transform = sirf.nifty.register(moving=mprage_image, fixed=ute_image)
resampler = sirf.nifty.Resampler(interpolation='linear')
mprage_in_pet_resampled = sirf.nifty.resample(
    image=mprage_image, reference=pet_image,
    transform=motion_transform, resampler=resampler)

# make a STIR image with the structural
mprage_prior = pet_image.clone()
mprage_prior.fill(mprage_in_pet_resampled)

# mprage_prior can now be used in reconstructions

# error checking on fill that the spaces are the same
mprage_prior.fill(mprage_image)  # RuntimeError: Image spaces are different
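
For illustration, the space check performed by fill could compare the two images' geometrical information; a minimal sketch, assuming hypothetical accessor names on GeometricalInfo (get_origin, get_spacing, get_size, get_direction) that mirror the members listed earlier:

import numpy as np

def check_same_space(target, source, tol=1e-3):
    # Hypothetical helper: raise if the two images are not on the same voxel grid
    a = target.get_patient_coord_geometrical_info()
    b = source.get_patient_coord_geometrical_info()
    same = (tuple(a.get_size()) == tuple(b.get_size())
            and np.allclose(a.get_origin(),    b.get_origin(),    atol=tol)
            and np.allclose(a.get_spacing(),   b.get_spacing(),   atol=tol)
            and np.allclose(a.get_direction(), b.get_direction(), atol=tol))
    if not same:
        raise RuntimeError('Image spaces are different')

# fill would call something like check_same_space(self, other) before copying voxel data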

Changes to STIR

Previously it was intended to leave STIR as unmodified as possible. After further investigation it has become clear that leaving STIR unmodified would prove problematic:

  • An image loaded by STIR from (e.g.) NIfTI, passed through SIRF, and written back to NIfTI would differ from the original.
  • Users of STIR would be "forced" into SIRF, since the functionality would not be implemented elsewhere (although alternatives such as petmr-rd-tools exist).

STIR has an image hierarchy for supporting non-voxelised images in the future, with base class DiscretisedDensity. The current plan is to support DICOM LPS orientation directly in this class. This will be in addition to an internal physical geometry which will be bed-based (currently gantry-based).

For details, see https://github.com/UCL/STIR/wiki/Proposals-for-new-features-major-changes#vendor-based-coordinate-system

Changes to Gadgetron

It is unknown at this stage what capabilities Gadgetron has with respect to geometry. If it is unable to express an image in the same DICOM-LPS space as the vendor, changes will likely have to be made.