Documat


New reconstruction strategies for polyenergetic x-ray computer tomography

  • Author: Cristóbal Martínez Sánchez
  • Thesis directors: Monica Abella Garcia (director), Manuel Desco Menéndez (co-director)
  • Defense: Universidad Carlos III de Madrid (Spain), 2021
  • Language: Spanish
  • Thesis committee: Jorge Ripoll Lorenzo (chair), J.V. Manjón Herrera (secretary), Adam M Alessio (member)
  • Abstract
    • X-ray computed tomography (CT) provides a 3D representation of the attenuation coefficients of patient tissues, which are roughly decreasing functions of energy in the range of energies used in clinical and preclinical scenarios (from 30 keV to 150 keV). Commercial scanners use polychromatic sources, which produce a beam containing a range of photon energies, because no X-ray laser exists as a usable alternative. Due to the energy dependence of the attenuation coefficients, low-energy photons are preferentially absorbed, shifting the mean energy of the X-ray beam to higher values; this effect is known as beam hardening. Classical reconstruction methods assume a monochromatic source and do not take into account the polychromatic nature of the spectrum, producing two artifacts in the reconstructed image: 1) cupping in large homogeneous areas and 2) dark bands between dense objects such as bone. These artifacts hinder correct visualization of the image and the recovery of the true attenuation coefficient values.
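
      The origin of both artifacts can be made explicit with the standard polyenergetic Beer-Lambert model; the following is a textbook formulation (not taken verbatim from the thesis), where S(E) denotes the normalized effective spectrum and \mu(E, \vec{r}) the attenuation coefficient at energy E and position \vec{r}:

          p_{\mathrm{poly}} = -\ln \int S(E)\, \exp\!\left( -\int_{L} \mu(E, \vec{r})\, d\ell \right) dE,
          \qquad \int S(E)\, dE = 1 .

      For a monochromatic beam at an effective energy E_0, the log-projection p_{\mathrm{mono}} = \int_{L} \mu(E_0, \vec{r})\, d\ell is linear in the traversed material, whereas p_{\mathrm{poly}} grows sublinearly with thickness because low-energy photons are removed first; analytical reconstruction of p_{\mathrm{poly}} therefore underestimates the attenuation along long or dense paths, which is exactly what appears as cupping and dark bands.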

      Several strategies can be found in the literature to compensate for the beam-hardening effect. Physical filters are generally used to pre-harden the beam before it reaches the sample, but this is not enough to remove the artifacts. The simplest correction method, implemented in most commercial scanners, is water linearization, which assumes that the object in the scan field is composed only of soft tissue. It is based on a function that replaces the energy-dependent attenuation values of water, given by the so-called beam-hardening function, with the corresponding monochromatic attenuation values, given by the monochromatic function. The beam-hardening function can be obtained either experimentally, through a calibration step with a phantom made of soft-tissue-equivalent material, or analytically, using knowledge of the spectrum and the mass attenuation coefficient of water. Since there is no beam-hardening effect when the amount of tissue traversed is zero, the monochromatic function can be calculated from the derivative of the beam-hardening function at zero. Nevertheless, as this method assumes that the object is composed only of soft tissue, it compensates the cupping in homogeneous objects but produces a suboptimal correction of the dark bands in heterogeneous regions. A fast correction of both cupping and dark bands can be performed with the so-called post-processing methods, which use the information from a segmentation obtained on a preliminary reconstruction to generate the correction parameters. Nevertheless, this segmentation may fail in low-dose scenarios, leading to an increase of the artifacts. An alternative for these scenarios is the use of iterative reconstruction methods, which incorporate a polychromatic model at the cost of a higher computational time than an analytical reconstruction followed by a post-processing method. All methods previously proposed in the literature require either knowledge of the X-ray spectrum, which is not always available, or the heuristic selection of some parameters, which has been shown not to be optimal for the correction of different slices in heterogeneous studies.
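
      As a rough illustration of water linearization, the sketch below builds the beam-hardening and monochromatic functions from an assumed spectrum and an assumed water attenuation curve (toy values chosen only to reproduce the qualitative behaviour, not the thesis calibration), fits the linearization, and applies it to measured log-projections:

          import numpy as np

          # Toy spectrum S(E) and water attenuation mu_w(E); illustrative values only.
          E = np.linspace(30.0, 150.0, 121)              # photon energies [keV]
          S = np.exp(-0.5 * ((E - 70.0) / 25.0) ** 2)    # assumed (unnormalized) spectrum
          S /= S.sum()                                   # normalize to unit area
          mu_w = 0.5 * (30.0 / E) ** 1.5 + 0.17          # assumed water attenuation [1/cm]

          t = np.linspace(0.0, 30.0, 301)                # water thickness [cm]

          # Beam-hardening function: polychromatic log-projection versus thickness.
          f = -np.log((S[None, :] * np.exp(-np.outer(t, mu_w))).sum(axis=1))

          # Monochromatic function: slope of f at zero thickness times thickness.
          mu_eff = (S * mu_w).sum()                      # effective attenuation = f'(0)
          g = mu_eff * t

          # Linearization: polynomial mapping measured values f(t) to ideal values g(t).
          lin = np.polynomial.Polynomial.fit(f, g, deg=3)

          def water_linearize(log_projections):
              """Apply the water-linearization correction to measured -log data."""
              return lin(log_projections)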

      This thesis is framed within a research line of the Biomedical Imaging and Instrumentation Group (BiiG) of the Bioengineering and Aerospace Engineering Department of Universidad Carlos III de Madrid focused on improving radiology systems. This research line is carried out in collaboration with the Unidad de Medicina y Cirugía Experimental of Hospital Gregorio Marañón through the Instituto de Investigación Sanitaria Gregorio Marañón, the Electrical Engineering and Computer Science (EECS) department of the University of Michigan, and SEDECAL, a Spanish company among the top ten medical imaging companies worldwide, which exports medical devices to 130 countries. As part of this research line, a high-resolution micro-CT scanner for small-animal samples was developed, which operates at low voltages, leading to strong beam-hardening artifacts. This scanner makes it possible to carry out preclinical studies, which can be divided into cross-sectional and longitudinal studies. Since cross-sectional studies consist of one acquisition at a specific point in time, the radiation dose is not an issue, allowing the use of standard-dose protocols with good image quality. In contrast, longitudinal studies consist of several acquisitions over time, so it is advisable to use low-dose protocols despite the reduction of the signal-to-noise ratio and the risk of artifacts in the image. This thesis presents a set of reconstruction strategies to cope with the beam-hardening artifacts, cupping and dark bands, in different dose scenarios, overcoming the problems of methods previously proposed in the literature.

      Since image quality is not an issue for standard-dose acquisitions, post-processing was the strategy selected for this scenario due to its low computational cost compared to iterative reconstruction methods. The proposed post-processing strategy extends the well-known water-linearization method to correct both cupping and dark bands. To that end, it considers the sample to be composed of only two tissue types: bone and soft tissue. The rationale behind this assumption comes from the energy dependence of the attenuation properties of the different tissues in the body, as most tissues behave like water and only bone differs significantly from it. The beam-hardening and monochromatic functions produced by different soft-tissue and bone combinations are characterized from empirical measurements, which avoids the tuning parameters and the extra projection and backprojection operations required by previously proposed methods in the literature. The characterization of these two functions could be done analytically, as in the water-linearization method, with knowledge of the spectrum and the mass attenuation coefficients of soft tissue and bone. However, as mentioned above, spectrum knowledge is not always available. To avoid the need for this knowledge, two different methods were explored. The first one, 2DCalBH, uses a calibration phantom made up of soft-tissue- and bone-equivalent materials. To this end, we propose a phantom for a small-animal scanner made up of a half-cylinder of PMMA as soft-tissue-equivalent material (3 cm radius) plus a triangular prism with rounded corners of AL6082 as bone-equivalent material (2.5 cm height and 6 cm width), designed to maximize the number of possible soft-tissue and bone combinations. Nevertheless, 2DCalBH may be affected if the tissues of the sample differ from the equivalent materials in the calibration phantom. The second method, FreeCalBH, avoids the use of equivalent materials and uses the sample itself as the calibration phantom.
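
      A minimal sketch of how the empirical calibration data could be assembled is shown below; the traversed PMMA and AL6082 thicknesses per detector ray are assumed to be known from the phantom and scan geometry, and all names and shapes are illustrative assumptions rather than the thesis interface:

          import numpy as np

          def collect_calibration_samples(t_soft, t_bone, logp, n_bins=64):
              """Bin measured -log projection values on a (soft-tissue, bone)
              thickness grid to obtain an empirical 2D beam-hardening surface."""
              s_edges = np.linspace(0.0, t_soft.max(), n_bins + 1)
              b_edges = np.linspace(0.0, t_bone.max(), n_bins + 1)
              sums, _, _ = np.histogram2d(t_soft, t_bone,
                                          bins=[s_edges, b_edges], weights=logp)
              counts, _, _ = np.histogram2d(t_soft, t_bone, bins=[s_edges, b_edges])
              with np.errstate(invalid="ignore"):
                  bh_surface = sums / counts    # mean measured value per (t_s, t_b) cell
              return s_edges, b_edges, bh_surface

      In 2DCalBH the thicknesses would come from the PMMA/AL6082 phantom geometry, while in FreeCalBH they would come from the projections of the preliminary bone and soft-tissue segmentation of the sample itself.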

      In both methods, the beam-hardening function is generated using the projections of a preliminary segmentation of bone and soft tissue as the x- and y-axes and the measured pixel values of the projections as the z-axis. In contrast to previously proposed methods, where this function is fitted to a polynomial model, we propose a logarithmic function to avoid the non-monotonic behaviour that polynomial models can exhibit. The monochromatic function is generated assuming that the beam-hardening effect vanishes when the amount of tissue traversed approaches zero, and is therefore obtained from the partial derivatives of the beam-hardening function at zero. Ideally, the correction would be obtained with a linearization function that replaces the uncorrected values of the beam-hardening function with the corresponding values of the monochromatic function. However, this relation is not injective, i.e., multiple input combinations of the beam-hardening function result in the same monochromatic value. To solve this non-uniqueness, the bone thickness is used as a constraint, generating multiple linearization functions, each obtained from a fixed pair of 1D beam-hardening and monochromatic functions. These functions are fitted by second-order polynomial regression, using linear least squares, and the obtained coefficients are stored in a look-up table that provides the appropriate coefficients for each value of bone thickness traversed. A bone-thickness spacing in the look-up table below the voxel size prevents streak artifacts caused by an inaccurate selection of the correction coefficients.
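
      The look-up-table construction and its application could look roughly as follows; the logarithmic model, its coefficients and the function names are assumptions made for illustration, not the thesis implementation:

          import numpy as np

          def bh_model(t_s, t_b, a):
              # Assumed logarithmic 2D beam-hardening model fitted to calibration data.
              return a[0] * np.log1p(a[1] * t_s + a[2] * t_b) + a[3] * t_s + a[4] * t_b

          def build_lut(a, t_b_grid, t_s_grid):
              """For each fixed bone thickness, fit a second-order polynomial (linear
              least squares) mapping polychromatic values to monochromatic values."""
              # Monochromatic slopes = partial derivatives of the BH model at (0, 0).
              mu_s0 = a[0] * a[1] + a[3]
              mu_b0 = a[0] * a[2] + a[4]
              lut = []
              for t_b in t_b_grid:
                  p_poly = bh_model(t_s_grid, t_b, a)        # 1D BH function at this t_b
                  p_mono = mu_s0 * t_s_grid + mu_b0 * t_b    # 1D monochromatic function
                  lut.append(np.polyfit(p_poly, p_mono, 2))
              return np.asarray(lut)                         # shape (len(t_b_grid), 3)

          def correct(logp, t_bone, lut, t_b_grid):
              """Correct measured log-projections, selecting for each ray the LUT entry
              closest to the bone thickness given by the preliminary segmentation."""
              idx = np.clip(np.searchsorted(t_b_grid, t_bone), 0, len(lut) - 1)
              c = lut[idx]                                   # (..., 3) coefficients
              return c[..., 0] * logp ** 2 + c[..., 1] * logp + c[..., 2]

      Keeping the spacing of t_b_grid below the voxel size, as noted above, prevents neighbouring rays from jumping between very different coefficient sets.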

      The evaluation against previously proposed correction methods with real and simulated data showed a good artifact compensation in the standard-dose scenario (cross-sectional studies), obtaining lower errors and fewer artifacts than two classical post-processing methods. Both proposed post-processing methods, 2DCalBH and FreeCalBH, showed a similar correction of the dark bands in simulated data, but evaluation on real data showed that FreeCalBH had a better overall performance. Therefore, 2DCalBH would be the preferred option for most studies, while FreeCalBH could be a good alternative when the attenuation properties of the bone in the sample differ greatly from those of the equivalent material used in the calibration (e.g., in the presence of osteoporosis or osteopetrosis).

      The main limitation of the proposed post-processing methods, shared by previous post-processing schemes, is the need for a preliminary bone segmentation. Evaluation on low-dose studies showed that errors in this bone segmentation hinder the selection of the appropriate linearization functions, which leads to inconsistent data in the corrected projection values. In particular, streaks produced by low sampling may have values similar to bone and thus be included in the bone segmentation. Since a proper post-processing method increases the bone values, which are underestimated due to the beam-hardening effect, streaks misclassified as bone are also enhanced.

      For longitudinal studies, where reducing the dose delivered to the sample is advisable, this thesis presents an iterative reconstruction strategy that incorporates the beam-hardening effect into the forward model. We propose three approaches to avoid the need for knowledge of the spectrum required by previously proposed iterative methods in the literature. The first approach, 1DIterBH, explores the idea of the post-processing method proposed by Joseph and Spital (JS), which characterizes the beam-hardening effect with the 1D function corresponding to water, already available in most scanners, plus two empirical parameters. Nevertheless, this approximation is only accurate when there are only small areas of bone in the sample. To obtain a better model approximation, the second (CalIterBH) and third (FreeIterBH) approaches empirically obtain the beam-hardening function of soft tissue and bone, similarly to 2DCalBH and FreeCalBH, respectively. As in the post-processing methods, the iterative strategy assumes that only soft tissue and bone are present in the sample. To prevent an increase in the number of unknowns and to avoid the preliminary segmentation, the attenuation of each voxel is modeled as a mixture of bone and soft tissue by defining density-dependent tissue fractions while maintaining one unknown per voxel. The thresholds of these tissue-fraction functions for soft tissue and bone are taken from data of the National Institute of Standards and Technology (NIST). This object model is included in the cost function and is iteratively updated, leading to a better segmentation of the tissues along the iterations. The algorithm is based on Poisson statistics, which are usually used as the noise model for CT acquisitions. An accurate model of the physics of CT acquisition needs to account for the energy-integrating detection process and the additive detector readout noise. On the other hand, sophisticated models often make the associated penalized likelihood more difficult to optimize. For simplicity, in this work the measurement statistics are approximated as independently distributed Poisson random variables that can account for extra background counts caused primarily by scatter. Because the data are noisy and tomographic reconstruction is an ill-posed problem, regularization is applied by adding to the likelihood a penalty term that controls how much the object departs from prior assumptions about image properties. In this work, a 3D roughness penalty with the convex edge-preserving Huber potential is used. Finally, the algorithm is derived with separable quadratic surrogates using the principles of optimization transfer, and an ordered-subsets approximation is used to increase speed.

      Evaluation on both simulated and real data showed that, as expected, the proposed iterative reconstruction strategy reduced the low-sampling artifacts and corrected the beam-hardening artifacts, outperforming the post-processing strategy in the low-dose scenario. 1DIterBH did not correct the beam-hardening artifacts in the whole volume with a unique set of parameter values, as expected, since it uses an approximation similar to the JS method. CalIterBH and FreeIterBH achieved an optimum correction over the whole volume, with slightly lower performance for the latter, similarly to FreeCalBH. Despite the good beam-hardening correction obtained with the proposed iterative strategy, it requires a high computational time, hindering its use for real-time applications.
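
      As a rough illustration of the pieces described above (density-dependent tissue fractions, an empirical two-material beam-hardening model in the forward projection, Poisson statistics with background counts, and a Huber roughness penalty), the sketch below combines them into a single cost function; the thresholds, names and shapes are illustrative assumptions, and the separable-quadratic-surrogate / ordered-subsets optimizer is omitted:

          import numpy as np

          MU_SOFT, MU_BONE = 0.2, 0.5          # assumed attenuation thresholds [1/cm]

          def tissue_fractions(mu):
              """Density-dependent fractions: the bone fraction ramps from 0 to 1
              between the two thresholds, keeping one unknown (mu) per voxel."""
              f_b = np.clip((mu - MU_SOFT) / (MU_BONE - MU_SOFT), 0.0, 1.0)
              return 1.0 - f_b, f_b

          def poly_forward(mu, A, bh_2d):
              """Polyenergetic forward model: project the soft and bone components of
              the image and evaluate the empirical 2D beam-hardening function."""
              f_s, f_b = tissue_fractions(mu)
              t_soft = A @ (f_s * mu)          # line integrals of the soft component
              t_bone = A @ (f_b * mu)          # line integrals of the bone component
              return bh_2d(t_soft, t_bone)     # expected -log transmission per ray

          def huber(d, delta):
              """Convex edge-preserving Huber potential for the 3D roughness penalty."""
              return np.where(np.abs(d) <= delta,
                              0.5 * d ** 2,
                              delta * (np.abs(d) - 0.5 * delta))

          def cost(mu, A, bh_2d, y, b0, r, neighbor_diffs, beta, delta):
              """Negative Poisson log-likelihood (up to constants) plus Huber penalty.
              y: measured counts, b0: blank-scan counts, r: background (scatter)."""
              ybar = b0 * np.exp(-poly_forward(mu, A, bh_2d)) + r
              data_term = np.sum(ybar - y * np.log(ybar))
              penalty = beta * np.sum(huber(neighbor_diffs(mu), delta))
              return data_term + penalty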

      The materials selected for the calibration phantom used in 2DCalBH and FreeCalBH, previously used as soft-tissue and bone equivalent materials, showed a very good artifact correction in both simulations and real data. Evaluation on simulated data also showed a good recovery of the ideal values, even for bone, which had not been possible in the previous literature without knowledge of the spectrum. Future work will evaluate the quantification in real data using ex vivo experiments, to determine whether more sophisticated equivalent materials are needed.

      The final proposed method, DeepBH, based on Deep Learning, attempts to reduce the computational time while maintaining the good performance of the iterative strategies. It uses a U-net-type architecture, since it maintains the matrix size between input and output images. We use U-net++, which redesigns the classic U-net architecture by reconnecting the encoder and decoder sub-networks. The rationale behind choosing this network is that it has been shown to produce better results than the classic U-net in medical imaging applications, where finer detail is needed. VGG was used as the encoder of U-net++ due to the performance of this convolutional neural network in a wide variety of tasks. 2D slices from eight rodent studies (five heads and three abdomens) acquired with a micro-CT scanner were used to train the different models of the network. The standard-dose scenario was acquired with 360 projections over an angular span of 360 degrees, while low-dose acquisitions were generated by removing one out of every two projections from these studies. Input images were obtained with FDK reconstruction for both the standard- and low-dose scenarios, while target images were generated from the reconstruction of the standard-dose acquisition with the proposed iterative method CalIterBH. Results on real data showed a good compensation of the beam-hardening and low-dose artifacts with a considerable reduction in time, raising interest in exploring this path further in the future. A different model was needed for each dose scenario, probably because the small size and low variability of the training set did not allow the network to generalize the correction of the dark bands with and without the presence of low-sampling artifacts. The low variability of the training data may also be the reason why a different model was needed for each anatomical part. Future work will evaluate whether an increase in the amount of training data would enable a single model to work independently of the dose scenario or the anatomical part under study.
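
      A minimal sketch of such a training setup is given below, assuming the third-party segmentation_models_pytorch package for the U-net++ with a VGG encoder; the loss, optimizer and hyperparameters are illustrative assumptions rather than the thesis configuration:

          import torch
          import torch.nn as nn
          import segmentation_models_pytorch as smp   # assumed U-net++ implementation

          # Regression network: one-channel CT slice in, one-channel corrected slice out.
          # Slices are assumed to be padded/cropped to a size divisible by 32.
          model = smp.UnetPlusPlus(
              encoder_name="vgg16",   # VGG encoder, as described above
              in_channels=1,
              classes=1,
              activation=None,
          )

          def train(model, loader, epochs=100, lr=1e-4, device="cuda"):
              """loader yields (fdk_slice, caliterbh_slice) pairs of shape (B, 1, H, W):
              inputs are FDK reconstructions (standard- or low-dose), targets are
              CalIterBH reconstructions of the standard-dose acquisition."""
              model = model.to(device)
              opt = torch.optim.Adam(model.parameters(), lr=lr)
              loss_fn = nn.L1Loss()                   # assumed pixel-wise loss
              for _ in range(epochs):
                  for x, y in loader:
                      x, y = x.to(device), y.to(device)
                      opt.zero_grad()
                      loss = loss_fn(model(x), y)
                      loss.backward()
                      opt.step()
              return model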

      The incorporation of these reconstruction strategies into real scanners is straightforward, requiring only a small modification of the calibration step already implemented in commercial scanners. Compared to previous methods in the literature, the proposed reconstruction strategies do not need knowledge of the spectrum and result in quantitative values, providing real attenuation coefficients. They also perform a good correction of the dark bands in the whole volume, independently of the combination of soft tissue and bone in each slice. Finally, the Deep Learning strategy is able to produce a fast correction of the beam-hardening artifacts independently of the dose scenario and overcomes the memory constraints previously found in the literature.

      The methods proposed in this thesis are being transferred to the company SEDECAL for implementation in the new generation of micro-CT scanners for preclinical research and in a multipurpose C-arm for veterinary applications.

