Image Registration

In medical applications, multiple images are often acquired from the same subject at different times or from different subjects. A critical step in utilizing these images is to align them so that their combined information can be visualized. Image registration is the process of determining the mapping between images so that features or structures in one image correspond to those in the other. The transformation between two image scenes can be either rigid or non-rigid. A rigid-body transformation has six degrees of freedom in three dimensions, i.e. three translations and three rotations. For most body organs, however, the motion is non-rigid and requires more degrees of freedom to describe accurately. Deformable registration therefore becomes necessary for many imaging applications.

One example is the registration of pre- and post-contrast-enhanced breast MRI images: deformable registration is required because soft tissue, such as breast tissue, typically undergoes non-rigid motion between acquisitions. Similar needs arise throughout diagnostic imaging, where images from different modalities, or from the same modality at different times, must be registered despite non-rigid tissue deformation; examples include cardiac imaging and chest PET-CT imaging, where non-rigid motion is a major source of tissue motion. In image-guided radiation therapy, non-rigid changes in organ shape or position are unavoidable because of the treatment itself or patient respiration. Deformable image registration is therefore critical for delivering an appropriate radiation dose while avoiding damage to adjacent healthy tissue; an example is image-guided radiation therapy of prostate tumors or tumors in other organs.
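To illustrate the six degrees of freedom of a rigid-body transformation, the short sketch below (a hypothetical NumPy example, not the registration code described here) applies three rotations and three translations to a set of 3D points.

```python
# Minimal sketch of a 3D rigid-body transform: six degrees of freedom,
# i.e. three rotations (about x, y, z) and three translations.
import numpy as np

def rigid_transform(points, angles, translation):
    """Apply a 6-DOF rigid-body transform to an (N, 3) array of points."""
    ax, ay, az = angles  # rotation angles in radians about x, y, z
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(ax), -np.sin(ax)],
                   [0, np.sin(ax),  np.cos(ax)]])
    Ry = np.array([[ np.cos(ay), 0, np.sin(ay)],
                   [0, 1, 0],
                   [-np.sin(ay), 0, np.cos(ay)]])
    Rz = np.array([[np.cos(az), -np.sin(az), 0],
                   [np.sin(az),  np.cos(az), 0],
                   [0, 0, 1]])
    R = Rz @ Ry @ Rx  # combined rotation
    return points @ R.T + np.asarray(translation)

# Example: rotate a point 10 degrees about z and shift it 5 mm along x.
p = np.array([[10.0, 0.0, 0.0]])
print(rigid_transform(p, angles=(0, 0, np.deg2rad(10)), translation=(5, 0, 0)))
```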

Thin-plate Spline (TPS) Deformable Image Registration

We created and evaluated an almost fully automated, 3D non-rigid registration algorithm using mutual information and a thin-plate spline (TPS) transformation for MR images of the prostate and pelvis. In the first step, an automatic rigid-body registration with special features was used to capture the global transformation. In the second step, local feature points were registered. An operator entered only five feature points (FPs), located at the prostate center, in the left and right hip joints, and in the left and right distal femurs. The program automatically determined and optimized additional FPs on the external pelvic skin surface and along the femurs. More than 600 control points were used to establish a TPS transformation that deformed the pelvic region and the prostate. Ten volume pairs were acquired from three volunteers in the diagnostic (supine) and treatment (supine with legs raised) positions. Various visualization techniques showed that warping corrected the significant pelvic misalignment that was not corrected by the rigid-body method. Gray-value measures of registration quality, including mutual information, correlation coefficient, and intensity difference, all improved with warping. The distance between prostate 3D centroids was 0.7 ± 0.2 mm following warping, compared with 4.9 ± 3.4 mm after rigid-body registration. The semiautomatic non-rigid registration works better than rigid-body registration when the patient position changes significantly between acquisitions; it could be a useful tool for many applications in prostate diagnosis and therapy.
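The sketch below illustrates the TPS idea under simplified assumptions: given a small set of corresponding control points (the coordinates here are made-up values, not the feature points used in the study), a thin-plate spline radial basis interpolator yields a smooth displacement field that can warp any coordinate from the reference volume toward the floating volume.

```python
# A minimal thin-plate spline (TPS) warp sketch using SciPy's RBF interpolator.
import numpy as np
from scipy.interpolate import RBFInterpolator

# Corresponding control points (in mm) in the reference and floating volumes
# (hypothetical values for illustration).
ref_pts = np.array([[0, 0, 0], [50, 0, 0], [0, 50, 0],
                    [0, 0, 50], [25, 25, 25]], dtype=float)
flt_pts = ref_pts + np.array([[1, 0, 0], [0, 2, 0], [0, 0, -1],
                              [1, 1, 0], [2, -1, 1]], dtype=float)

# TPS interpolation of the displacement field defined at the control points.
tps = RBFInterpolator(ref_pts, flt_pts - ref_pts, kernel="thin_plate_spline")

# Warp an arbitrary reference-space coordinate into the floating volume.
query = np.array([[10.0, 10.0, 10.0]])
print(query + tps(query))
```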

Figure 1. Comparison of non-rigid and rigid-body registration for volumes acquired in the treatment and diagnostic positions. Image (a) is from the reference volume, which was acquired in the treatment position and in which the prostate was manually segmented. Images in the left and right columns are from the floating volume acquired in the diagnostic position following rigid-body and non-rigid registration, respectively. To show potential mismatch, the prostate contour from the reference in (a) is copied to both (b) and (c) and is magnified as the dashed contours in (d) and (e). The movement of the prostate to the posterior is corrected with warping (e) but not with rigid-body registration (d). Pelvic boundaries manually segmented from the reference show significant misalignment with rigid-body registration (f) that is greatly improved with warping (g). All registration experiments are performed in three dimensions (3D); transverse slices covering the prostate are selected from the 3D image volumes.

B-spline Deformable Image Registration

We implemented a mutual information-based B-spline deformable registration algorithm. Mutual information does not assume a linear intensity relationship between images and has been used to register images of either the same modality or different modalities. A motion constraint is included in the optimization to obtain a smooth deformation rather than an unrealistic result. A gradient-based minimization method is used to find the B-spline control coefficients of the optimal transformation. A multiresolution strategy is applied, registering from downsampled, low-resolution images up to the original, high-resolution images. The number of control points also increases hierarchically within the multiresolution framework, and the deformation computed at a lower resolution serves as the initial transformation for the optimization at the next higher resolution.
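A minimal sketch of this kind of pipeline, written with SimpleITK rather than our own implementation, is shown below; the file names and grid sizes are placeholders, and the motion-constraint (regularization) term and the hierarchical increase in control points are omitted for brevity.

```python
# Mutual-information B-spline registration with a coarse-to-fine schedule (SimpleITK sketch).
import SimpleITK as sitk

fixed = sitk.ReadImage("reference.nii", sitk.sitkFloat32)   # placeholder paths
moving = sitk.ReadImage("floating.nii", sitk.sitkFloat32)

# B-spline control-point grid defined over the fixed-image domain.
tx = sitk.BSplineTransformInitializer(fixed, transformDomainMeshSize=[8, 8, 8])

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsLBFGSB()                 # gradient-based optimizer
reg.SetInitialTransform(tx, inPlace=True)
reg.SetInterpolator(sitk.sitkLinear)

# Multiresolution: register downsampled images first, then refine at full resolution.
reg.SetShrinkFactorsPerLevel([4, 2, 1])
reg.SetSmoothingSigmasPerLevel([2, 1, 0])

out_tx = reg.Execute(fixed, moving)
warped = sitk.Resample(moving, fixed, out_tx, sitk.sitkLinear, 0.0)
```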

Figure 2. Deformable registration of 3D brain MR data. With the deformable transformation, image (a) is warped into image (b), whose shape matches the reference image (c).

Finite Element Model-based Deformable Registration

For soft-tissue registration, e.g. of tumors, we developed a finite element model (FEM)-based deformable registration method and applied it to tumor MRI and PET images. In the first step, we applied a rigid registration algorithm to align the cropped MRI and PET images using three translations and three rotations. After registration, we manually segmented the tumor slice by slice on both the high-resolution MRI and PET image volumes. We then applied the deformable registration algorithm, which deforms the tumor surface from the MRI volume toward that from the PET image. The displacements at the surface vertices provide the force that drives the elastic surface from the MRI toward that from the PET image. The tumor was modeled as a linear isotropic elastic material, and the FEM was used to infer the volumetric deformation of the tumor from its surface. The force is integrated over each element and distributed over the nodes belonging to the element using the element's shape functions. After obtaining the displacement field at all vertices, we used linear interpolation to obtain the deformed image volume of the tumor.
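The final interpolation step can be sketched as follows (a hypothetical NumPy/SciPy example, not the FEM code itself), assuming a dense voxel-wise displacement field has already been derived from the FEM solution.

```python
# Resample a tumor volume through a dense displacement field with linear interpolation.
import numpy as np
from scipy.ndimage import map_coordinates

def warp_volume(volume, displacement):
    """volume: (Z, Y, X) array; displacement: (3, Z, Y, X) voxel displacements."""
    zz, yy, xx = np.meshgrid(np.arange(volume.shape[0]),
                             np.arange(volume.shape[1]),
                             np.arange(volume.shape[2]), indexing="ij")
    coords = np.stack([zz, yy, xx]).astype(float) + displacement
    # order=1 gives trilinear interpolation of the intensities.
    return map_coordinates(volume, coords, order=1, mode="nearest")

vol = np.random.rand(32, 32, 32)          # stand-in tumor volume
disp = np.zeros((3, 32, 32, 32))
disp[2] = 1.5                             # e.g. shift everything 1.5 voxels along x
warped = warp_volume(vol, disp)
```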

Figure 3. Three-dimensional meshes of a tumor. (a) Tumor segmented from a high-resolution MR volume. (b) The same tumor from the corresponding microPET emission images. (c) Color overlay of the tumor from MRI (yellow) and PET (red). The tumor deformed between the two imaging sessions.

Two-dimensional (2D) Image Registration Software

The 2D registration program can align 2D images using a rigid transformation and integrates manual and automatic registration. It has the following features: (1) Register two 2D images automatically using intensity-based methods; (2) Register two 2D images manually; and (3) Load multiple floating slices, register them to the reference slice, and display the registration parameters for each floating slice.
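A minimal sketch of automatic intensity-based 2D rigid registration, written with SimpleITK as a stand-in for the actual program (file names are placeholders), might look like this:

```python
# Automatic intensity-based rigid registration of two 2D images (SimpleITK sketch).
import SimpleITK as sitk

fixed = sitk.ReadImage("reference_slice.png", sitk.sitkFloat32)   # placeholder files
moving = sitk.ReadImage("floating_slice.png", sitk.sitkFloat32)

# Initialize a 2D rigid (rotation + translation) transform from image geometry.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler2DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0,
                                             minStep=1e-4,
                                             numberOfIterations=200)
reg.SetInitialTransform(initial, inPlace=False)
reg.SetInterpolator(sitk.sitkLinear)

tx = reg.Execute(fixed, moving)
print(tx.GetParameters())   # rotation (rad) and translations (x, y)
```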

Figure 4. The 2D image registration software.

Three-dimensional (3D) Image Registration Software

The 3D registration program can align image volumes from CT, PET, MRI, and/or other imaging modalities. It has the following features: (1) Resample the floating volume to the reference volume's size and resolution so that the two volumes have the same voxel resolution in every direction; (2) Automatic registration based on mutual information; (3) Manual registration using 3D translations and rotations; (4) Two deformable registration approaches; (5) Display of the volumes along all directions as well as the fusion results; and (6) Display of a location line in 3D space for easily locating any point. The user interface is straightforward and easy to use.
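Feature (1) is essentially a resampling of the floating volume onto the reference grid; a minimal SimpleITK sketch (with placeholder file names, not the program's own code) is shown below.

```python
# Resample a floating volume onto the reference volume's grid so both share
# the same size and voxel spacing in every direction.
import SimpleITK as sitk

reference = sitk.ReadImage("reference_ct.nii")   # placeholder paths
floating = sitk.ReadImage("floating_pet.nii")

# Identity transform: only the sampling grid changes, not the anatomy.
resampled = sitk.Resample(floating, reference, sitk.Transform(),
                          sitk.sitkLinear, 0.0, floating.GetPixelID())
print(resampled.GetSize(), resampled.GetSpacing())
```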

Figure 5. The 3D image registration software.

3D to 2D Registration

This project implements three-dimensional (3D) to two-dimensional (2D) registration of computed tomography (CT) and dual-energy digital radiography (DR) images for the detection of coronary artery calcification. In order to use CT as the “gold standard” for evaluating the ability of DR images to detect and localize calcium, we developed an automatic intensity-based 3D-to-2D registration method for 3D CT volumes and 2D DR images. To generate digitally reconstructed radiographs (DRRs) from the CT volumes, we developed three projection methods: Gaussian-weighted projection, threshold-based projection, and average-based projection. Normalized cross correlation (NCC) and normalized mutual information (NMI) are used as the similarity measures.
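As a rough illustration (a hypothetical NumPy sketch, not the project code), average-based projection and NCC can be written as follows:

```python
# Average-intensity projection of a CT volume into a DRR-like image, plus
# normalized cross correlation (NCC) against a DR image of the same size.
import numpy as np

def average_projection(ct_volume, axis=0):
    """Project a (Z, Y, X) CT volume along one axis by averaging intensities."""
    return ct_volume.mean(axis=axis)

def ncc(a, b):
    """Normalized cross correlation between two same-sized 2D images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

ct = np.random.rand(64, 128, 128)               # stand-in CT volume
drr = average_projection(ct, axis=0)            # simulated radiograph
dr = drr + 0.05 * np.random.randn(*drr.shape)   # stand-in DR image
print(ncc(drr, dr))
```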

The software program has the following capabilities: (1) Simulate DR images from the reference CT volume at any angle, generating the projection image with the Gaussian-weighted, threshold-based, or average-based projection method; and (2) Perform registration between the original DR images and the DRR images reconstructed from the CT volume.

Figure 6. Graphic user interface (GUI) for the 3D to 2D registration software.

Slice to Volume Registration

Slice-to-volume registration aligns a two-dimensional image slice to a three-dimensional image volume. In this study, we registered live-time interventional magnetic resonance imaging (iMRI) slices with a previously obtained, high-resolution MRI volume, which in turn can be registered with a variety of functional images, e.g. PET and SPECT, for tumor targeting. We created and evaluated a slice-to-volume registration algorithm with special features for potential use in iMRI-guided radiofrequency (RF) thermal ablation. The algorithm's features included a multiresolution approach, two similarity measures, and automatic restarting to avoid local minima. Imaging experiments were performed on volunteers using a conventional diagnostic MR scanner and an interventional MRI system under realistic conditions. Both high-resolution MR volumes and actual iMRI image slices were acquired from the same volunteers. Actual and simulated iMRI images were used to test the dependence of slice-to-volume registration on image noise, coil inhomogeneity, and RF needle artifacts. To quantitatively assess registration, we calculated the mean voxel displacement over a volume of interest between slice-to-volume registration and volume-to-volume registration, which had previously been shown to be quite accurate. More than 800 registration experiments were performed. For transverse image slices covering the prostate, the slice-to-volume registration algorithm was 100% successful with an error of < 2 mm, and the mean ± standard deviation of the error was only 0.4 ± 0.2 mm. Visualizations such as combined sector display and contour overlay showed excellent registration of the prostate and other organs throughout the pelvis. Error was greater when an image slice was obtained at other orientations and positions, mostly because of inconsistent image content such as that caused by variable rectal and bladder filling. These preliminary experiments indicate that MR slice-to-volume registration is sufficiently accurate to aid image-guided therapy.

Figure 7. The slice-to-volume registration software.
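For illustration only, the toy sketch below (hypothetical NumPy/SciPy code, not the actual algorithm) captures two of the ideas above: reslicing the volume for a candidate out-of-plane position and restarting the search from several starting points to avoid local minima. The real algorithm also optimizes rotations and in-plane translations and uses a multiresolution scheme and two similarity measures.

```python
# Toy slice-to-volume search: find the out-of-plane depth whose resliced plane
# best correlates with a given 2D slice, restarting from several initial guesses.
import numpy as np
from scipy.ndimage import map_coordinates

def reslice(volume, z):
    """Extract the axial plane at (possibly fractional) depth z by interpolation."""
    yy, xx = np.meshgrid(np.arange(volume.shape[1]),
                         np.arange(volume.shape[2]), indexing="ij")
    coords = np.stack([np.full_like(yy, z, dtype=float), yy, xx])
    return map_coordinates(volume, coords, order=1)

def correlation(a, b):
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

def register_slice(volume, slice2d, restarts=(5.0, 15.0, 25.0)):
    best_z, best_score = None, -np.inf
    for z0 in restarts:                              # automatic restarting
        for z in np.arange(z0 - 5, z0 + 5, 0.25):    # local search around each start
            score = correlation(reslice(volume, z), slice2d)
            if score > best_score:
                best_z, best_score = z, score
    return best_z, best_score

vol = np.random.rand(32, 64, 64)                     # stand-in MRI volume
target = reslice(vol, 17.3)                          # simulated iMRI slice
print(register_slice(vol, target))
```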