Abstract of Visualización multimodal con texturas 3d (Multimodal visualization with 3D textures)

Pascual Abellan Moreno

  • Modern medical imaging devices such as computed tomography (CT) and magnetic resonance imaging (MRI) provide high-resolution cross-sectional images of the scanned anatomical regions. Other devices, such as positron emission tomography (PET) and functional MRI, capture images of the body's functional activity. These two types of images can be fused to construct 3D multimodal graphical models. This thesis addresses the visualization of such models; its aim is to improve the efficiency and expressiveness of current visualization methods.

    The thesis provides a review of the state of the art in chapter 2 and presents four contributions in the following chapters. First, in chapter 3, we propose a volume visualization framework based on 3D textures for unimodal and multimodal data. The efficiency of the method relies on an extensive use of the capabilities of modern programmable graphics processing units. It loads the volumes as 3D textures and computes a set of view-parallel slices that are composited in depth order to produce the final image. Classification, shading and fusion are performed in dedicated fragment shaders. The framework supports fusion at different levels of the visualization pipeline. In addition, it provides multimodal window display, surface mapping, normal mapping and 2D fusion. To specify the 2D fusion transfer function, a new widget is proposed.
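
    As an illustration of the slice-based pipeline described above, the following is a minimal CPU-side Python/NumPy sketch of back-to-front compositing of slices with per-modality classification and a simple fusion step. The transfer functions (classify, fuse), the fixed slicing axis and the blending weight are hypothetical placeholders; in the actual framework these operations run in fragment shaders sampling 3D textures.

```python
import numpy as np

def classify(intensity):
    """Hypothetical 1D transfer function: maps a [0,1] intensity slice to RGBA."""
    rgb = np.stack([intensity, 0.5 * intensity, 1.0 - intensity], axis=-1)
    alpha = np.clip(intensity - 0.2, 0.0, 1.0)[..., None]
    return np.concatenate([rgb, alpha], axis=-1)

def fuse(rgba_a, rgba_b, w=0.5):
    """Hypothetical fusion: blend the classified samples of the two modalities."""
    return w * rgba_a + (1.0 - w) * rgba_b

def render(vol_a, vol_b, n_slices):
    """Back-to-front 'over' compositing of axis-aligned slices.
    On the GPU these would be view-parallel polygons sampling 3D textures."""
    h, w = vol_a.shape[1], vol_a.shape[2]
    image = np.zeros((h, w, 3))
    for z in reversed(range(n_slices)):                 # back to front
        rgba = fuse(classify(vol_a[z]), classify(vol_b[z]))
        a = rgba[..., 3:]
        image = rgba[..., :3] * a + image * (1.0 - a)   # "over" operator
    return image

# Two toy co-registered modalities (e.g. an anatomical and a functional volume).
vol_a = np.random.rand(32, 64, 64)
vol_b = np.random.rand(32, 64, 64)
print(render(vol_a, vol_b, 32).shape)                   # (64, 64, 3)
```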

    The second contribution, described in chapter 4, is a method for fusing a time-varying modality with a static modality. Time-varying data are represented with run-length encoding along the time axis. At each frame, the 3D texture is updated only if required and then fused with the static data. The experimental results show that animations run twice as fast as without this encoding.
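
    A minimal sketch of the idea behind this encoding, assuming per-voxel run-length compression along the time axis (the names rle_time and play are illustrative, not the thesis implementation): runs of identical values let the player skip texture updates for frames in which the voxel does not change.

```python
def rle_time(series):
    """Run-length encode a voxel's time series as [(value, run_length), ...]."""
    runs = []
    for v in series:
        if runs and runs[-1][0] == v:
            runs[-1] = (v, runs[-1][1] + 1)
        else:
            runs.append((v, 1))
    return runs

def play(runs):
    """Yield (frame, value, changed); 'changed' marks the frames where the
    corresponding 3D texture entry would actually need to be re-uploaded."""
    t, prev = 0, None
    for v, length in runs:
        for _ in range(length):
            yield t, v, v != prev
            prev = v
            t += 1

runs = rle_time([3, 3, 3, 7, 7, 1])
print(runs)                           # [(3, 3), (7, 2), (1, 1)]
print([c for _, _, c in play(runs)])  # [True, False, False, True, False, True]
```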

    The third and fourth contributions address the expressiveness of the visualizations, first for unimodal data and then for multimodal data. In chapter 5, we propose a method for visualizing unimodal multi-classified volumes. The method consists of two stages. In a pre-process, it clusters the voxel model into regions of voxels that share the same classification criteria and constructs a graph of relationships between the clusters. The second stage is the interactive visualization: users define selected regions as a regular expression composed of cluster identifiers and boolean operators, and apply different types of illustrative effects to show the relationships between the selected regions and the rest of the volume. The results show that this method lets users create images that could not be obtained with conventional visualization methods.

    This method is extended to multimodal data in chapter 6. It allows users to define regions according to a combination of modality values, and to define different shading and fusion transfer functions in different regions. In this way, different fusions can be applied within a single image to highlight and contextualize specific features of the data. Moreover, we provide editing tools with which users interactively paint the regions in which they want to apply a particular type of fusion and shading. The result is a flexible algorithm that provides new means of exploring the relationships between modalities.
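
    The following is a small Python sketch of the two-stage idea for the unimodal case, under simplifying assumptions: classification is reduced to a per-voxel label volume, clusters are 6-connected components of equal labels, the relationship graph is a set of adjacency edges, and the selection expression is a plain boolean combination of hypothetical cluster names c0, c1, ... rather than the thesis's actual regular-expression syntax.

```python
import numpy as np
from collections import deque
from itertools import product

def clusterize(labels):
    """Stage 1 (pre-process): flood-fill 6-connected voxels that share the same
    classification label into clusters, and record which clusters touch."""
    cluster = np.full(labels.shape, -1, dtype=int)
    edges = set()
    next_id = 0
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for seed in product(*map(range, labels.shape)):
        if cluster[seed] != -1:
            continue
        cluster[seed] = next_id
        queue = deque([seed])
        while queue:
            p = queue.popleft()
            for d in steps:
                q = tuple(p[i] + d[i] for i in range(3))
                if any(c < 0 or c >= labels.shape[i] for i, c in enumerate(q)):
                    continue
                if labels[q] != labels[p]:
                    if cluster[q] != -1:          # neighbouring cluster already labelled
                        edges.add(frozenset((int(cluster[p]), int(cluster[q]))))
                elif cluster[q] == -1:
                    cluster[q] = next_id
                    queue.append(q)
        next_id += 1
    return cluster, edges

def select(cluster, expression):
    """Stage 2 (interactive): evaluate a boolean combination of cluster ids,
    e.g. "c0 | c2", to obtain a voxel selection mask."""
    masks = {f"c{i}": cluster == i for i in range(int(cluster.max()) + 1)}
    return eval(expression, {"__builtins__": {}}, masks)

labels = np.zeros((8, 8, 8), dtype=int)
labels[:, :, 4:] = 1                        # two classification regions
cluster, edges = clusterize(labels)
mask = select(cluster, "c0 & ~c1")          # boolean combination of cluster ids
print(cluster.max() + 1, edges, mask.sum())
```

    Per-region shading and fusion, as in chapter 6, could then be driven by such selection masks, applying a different transfer function inside and outside each selected region; the thesis itself operates on richer classification criteria and on combinations of modality values.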

