
Documat


Healthcare data heterogeneity and its contribution to machine learning performance

  • Author: Francisco Javier Pérez Benito
  • Thesis supervisors: Juan Miguel García Gómez, José Alberto Conejero Casares
  • Defense: Universitat Politècnica de València (Spain), 2020
  • Language: Spanish
  • Thesis committee: Luis José Rodríguez Muñiz (chair), Miguel Angel García March (secretary), Consuelo Borrás Blasco (member)
  • Links
    • Open-access thesis at: RiuNet
  • Abstract
    • Data quality assessment has many dimensions, from those as obvious as completeness and consistency to less evident ones such as correctness or the ability to represent the target population. In general, these dimensions can be classified as either produced by an external effect or inherent in the data itself. This work focuses on the inherent ones, in particular the temporal and multisource variability of healthcare data repositories. Processes are usually improved over time, and this has a direct impact on the distribution of the data they produce. Similarly, how a process is executed may vary between sources for many reasons, such as differing human interpretations of standard protocols or the differing prior experience of the experts involved.
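
      The abstract does not spell out the information-geometry machinery, but a common way to quantify the distance between two sources' distributions in this setting is the Jensen-Shannon distance. The sketch below, with invented diagnosis-code data, is an illustration under that assumption, not the thesis' exact formulation.

```python
# Minimal multisource-variability sketch: summarize each source as an
# empirical probability distribution and compare with the Jensen-Shannon
# distance. The metric choice and the toy data are assumptions.
import numpy as np
from scipy.spatial.distance import jensenshannon

def empirical_pmf(values, categories):
    """Relative frequency of each category in one source's records."""
    counts = np.array([np.sum(values == c) for c in categories], dtype=float)
    return counts / counts.sum()

# Hypothetical diagnosis codes recorded by two hospitals (illustrative only).
categories = ["I10", "E11", "J45"]
source_a = np.array(["I10", "I10", "E11", "J45", "I10"])
source_b = np.array(["E11", "E11", "J45", "J45", "E11"])

p = empirical_pmf(source_a, categories)
q = empirical_pmf(source_b, categories)

# 0 means identical distributions; values near 1 signal strong divergence.
print(f"Jensen-Shannon distance: {jensenshannon(p, q, base=2):.3f}")
```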

      Artificial intelligence has become one of the most widespread technological paradigms across scientific and industrial fields. Advances not only in models but also in hardware have led to its use in almost every area of science. However, problems solved with this technology often have the drawback of not being interpretable, or at least not as interpretable as with classical mathematical or statistical techniques. This motivated the emergence of the concept of "explainable artificial intelligence", which studies methods to quantify and visualize the training process of machine learning models.
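
      No particular explainability method is named here; as one representative example, the sketch below applies permutation feature importance, a model-agnostic technique available in scikit-learn, to an off-the-shelf dataset and model chosen purely for illustration.

```python
# Illustrative model-agnostic explainability sketch (permutation feature
# importance); the abstract names no specific method, so this is an
# assumption, not the thesis' approach.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the score drop:
# large drops mark features the model genuinely relies on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
top5 = sorted(zip(X.columns, result.importances_mean),
              key=lambda t: -t[1])[:5]
for name, mean in top5:
    print(f"{name}: {mean:.3f}")
```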

      On the other hand, real systems can often be represented as large networks (graphs), and one of the most relevant features of such networks is their community, or clustering, structure. Since problems in sociology, biology, and clinical settings can usually be modeled with graphs, community detection algorithms are becoming increasingly widespread in the biomedical field.
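
      As a concrete illustration of community detection, the following sketch partitions a small toy graph by modularity maximization with NetworkX; this specific algorithm is one representative choice, not necessarily the one used in the thesis.

```python
# Community-detection sketch on a small graph via greedy modularity
# maximization. The graph is a hypothetical example (two dense clusters
# joined by a weak bridge).
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
G.add_edges_from([
    ("a", "b"), ("b", "c"), ("a", "c"),   # first dense cluster
    ("d", "e"), ("e", "f"), ("d", "f"),   # second dense cluster
    ("c", "d"),                           # weak bridge between clusters
])

for i, community in enumerate(greedy_modularity_communities(G)):
    print(f"community {i}: {sorted(community)}")
```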

      In the present doctoral thesis, contributions have been made in the three areas mentioned above. On the one hand, temporal and multisource variability assessment methods based on information geometry were used to detect variability in data distributions that may hinder data reuse and, hence, the conclusions that can be extracted from the data. The usability of this methodology was demonstrated by a temporal variability analysis that detected data anomalies in seven years of a hospital's electronic health records.
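
      A temporal analysis of this kind can be sketched as follows: summarize each period as a histogram, compare periods pairwise with an information-theoretic distance, and project the result so distribution shifts stand out. The monthly batching, synthetic data, and MDS projection below are illustrative assumptions rather than the thesis' exact procedure.

```python
# Temporal-variability sketch: per-month histograms, pairwise
# Jensen-Shannon distances, and a 2-D embedding in which abrupt
# distribution changes appear as jumps between consecutive months.
import numpy as np
from scipy.spatial.distance import jensenshannon
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
months = 84  # seven years of hypothetical monthly batches
bins = np.linspace(0, 1, 21)

# Synthetic "lab value" stream whose mean drifts after month 40,
# mimicking a change in recording practice.
histograms = []
for m in range(months):
    shift = 0.2 if m >= 40 else 0.0
    values = np.clip(rng.normal(0.4 + shift, 0.1, size=500), 0, 1)
    hist, _ = np.histogram(values, bins=bins)
    histograms.append(hist / hist.sum())

# Pairwise Jensen-Shannon distance matrix between monthly distributions.
D = np.array([[jensenshannon(p, q, base=2) for q in histograms]
              for p in histograms])

# Embed the months in 2-D; consecutive points far apart flag anomalies.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)
print(coords[:3])
```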

      Moreover, we showed that this methodology can have a positive impact when applied before a study. To this end, we first extracted, using machine learning techniques, the variables that most influenced headache intensity in migraine patients. One of the principal characteristics of machine learning algorithms is their capacity to fit the training set; in datasets with a small number of observations, the model can be biased by the training sample. The variability observed after applying the mentioned methodology, taking as sources the registries of migraine patients with different headache intensities, served as evidence of the truthfulness of the extracted features. Second, the same approach was applied to measure the variability among the gray-level histograms of digital mammographies. We demonstrated that the acquisition device produced the observed variability and that, after introducing an image preprocessing step, the performance of a deep learning model that estimated a marker of breast cancer risk increased.
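
      The abstract does not describe the preprocessing step itself; one standard way to remove device-induced gray-level variability is histogram matching against a reference image, sketched below with scikit-image as an assumption rather than the thesis' actual pipeline.

```python
# Sketch of device-variability reduction via gray-level histogram matching:
# map each image's intensity distribution onto a reference image's.
# One standard normalization, assumed here for illustration.
import numpy as np
from skimage.exposure import match_histograms

rng = np.random.default_rng(1)

# Hypothetical images from two acquisition devices with different contrast.
reference = rng.normal(0.5, 0.10, size=(256, 256)).clip(0, 1)
image = rng.normal(0.6, 0.18, size=(256, 256)).clip(0, 1)

matched = match_histograms(image, reference)

# After matching, the image's gray-level statistics follow the reference's,
# so a downstream model sees device-independent inputs.
print(f"before: mean={image.mean():.3f}  after: mean={matched.mean():.3f}")
```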

      Finally, given a dataset containing answers to a survey composed of psychometric scales, that is, questionnaires that measure psychological factors such as depression or coping, two deep learning architectures that exploited the structure of the data were defined. First, we designed a deep learning architecture based on the conceptual structure of those psychometric scales. This architecture, trained to model the participants' degree of happiness, improved performance compared to classical statistical approaches. A second architecture, automatically designed using community detection in graphs, was not only a contribution
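
      One plausible reading of an architecture "using the conceptual structure of psychometric scales" is a subnetwork per questionnaire scale whose outputs are merged before the prediction head. The Keras sketch below implements that reading; the scale names, item counts, and layer sizes are invented for illustration and may differ from the thesis' published architecture.

```python
# Hedged sketch of a scale-structured network: one small subnetwork per
# psychometric scale, merged to predict a happiness score. All names and
# sizes are invented for illustration.
import tensorflow as tf
from tensorflow.keras import layers, Model

scales = {"depression": 9, "coping": 14, "social_support": 6}  # items per scale

inputs, branches = [], []
for name, n_items in scales.items():
    inp = layers.Input(shape=(n_items,), name=name)
    # Each scale gets its own dense branch, mirroring the questionnaire's
    # conceptual grouping instead of mixing all items from the start.
    branches.append(layers.Dense(8, activation="relu")(inp))
    inputs.append(inp)

merged = layers.Concatenate()(branches)
hidden = layers.Dense(16, activation="relu")(merged)
output = layers.Dense(1, name="happiness")(hidden)

model = Model(inputs=inputs, outputs=output)
model.compile(optimizer="adam", loss="mse")
model.summary()
```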

