Personalized medicine in surgical treatments combining tracking systems, augmented reality and 3D printing

  • Author: Rafael Moreta Martínez
  • Thesis supervisor: Javier Pascau González-Garzón
  • Defended: Universidad Carlos III de Madrid (Spain), 2021
  • Language: Spanish
  • Examination committee: María Jesús Ledesma Carbayo (chair), Arrate Muñoz Barrutia (secretary), Csaba Pinter (member)
  • Abstract
    • Over the last twenty years, significant advances in healthcare technology have enabled a new way of practicing medicine that focuses on the problems and needs of each patient as an individual: so-called personalized medicine. In surgical treatments, personalization has been made possible by key technologies adapted to the specific anatomy of each patient and to the needs of physicians. Tracking systems, augmented reality (AR), three-dimensional (3D) printing and artificial intelligence (AI) have supported this individualized medicine in many ways.

      Tracking devices, such as optical tracking systems (OTS), are an essential part of surgical navigation thanks to their versatility, size, and precision in real-time surgical tool tracking. These devices are included in commercial navigation systems and offer solutions for many surgical treatments where rigid structures, such as bone, are involved. However, these systems lack flexibility and cannot be adapted to the requirements of every case, limiting their utility. For example, the patient registration methodologies offered by current commercial systems cannot be adapted to cases involving the extremities, due to their large number of joints with complex movements. The combination of open-source software and tracking devices could be an alternative in cases where commercial systems fall short.

      Augmented reality technology has brought tangible benefits to medical procedures thanks to the interaction between virtual 3D models and the real environment, improving spatial vision and surgical ability. AR applications have expanded into surgical treatments, guiding surgeons during clinical interventions in many medical fields, such as maxillofacial surgery, orthopedics and neurosurgery. Although hands-free AR devices, such as the HoloLens, are leading the way in how AR should be experienced, we are still far from their everyday use in the medical field, and more possible applications need to be explored. Adoption by medical professionals is still restricted, since developing AR applications requires extensive knowledge of engineering and software development, limiting their use in surgical planning and patient communication. Moreover, there are constraints when they are applied in surgical procedures in terms of portability, calibration and tracking. These disadvantages make it difficult to register the augmented data with the real-world space. Up to now, patient registration has been achieved by different methodologies. These solutions work in some specific applications, but they require extra hardware, add complexity to the workflow, increase procedure time, and may not be accurate enough, preventing their implementation in many cases.

      3D printing technology has revolutionized the medical field by allowing anyone to easily convert 3D models created from medical imaging studies into physical objects using a layering technique. Direct interaction with the patient's anatomy is especially important in surgery, where 3D printed patient-specific anatomical models improve spatial perception, enhancing the results of preoperative planning and clinical interventions in many areas. Furthermore, unique and specific surgical instruments can be designed and fabricated, reducing manufacturing time and costs and improving customization according to the surgeon's needs. This has moved hospitals towards implementing their own in-house 3D printing hubs, increasing productivity and reducing both the cost and delivery times of the 3D printed models. This new trend has significant value in personalized medicine, giving physicians resources to create patient-specific tools that could improve medical outcomes, adapting to the needs of both the patient and the physicians.
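
      To illustrate this pipeline, the following minimal Python sketch shows how a printable surface mesh can be extracted from a segmented medical image. It assumes a binary segmentation is already available and uses SimpleITK, scikit-image and trimesh; the file names are hypothetical.

        import SimpleITK as sitk
        from skimage import measure
        import trimesh

        # Load a binary segmentation of the target anatomy (hypothetical file)
        mask = sitk.ReadImage("segmentation.nrrd")
        volume = sitk.GetArrayFromImage(mask)   # voxel array ordered z, y, x
        spacing = mask.GetSpacing()[::-1]       # reorder spacing to match

        # Extract the boundary surface with the marching cubes algorithm
        verts, faces, _, _ = measure.marching_cubes(volume, level=0.5,
                                                    spacing=spacing)

        # Build a mesh and export it as STL, ready for slicing and printing
        mesh = trimesh.Trimesh(vertices=verts, faces=faces)
        mesh.export("patient_model.stl")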

      Segmentation of the patient's anatomy from medical imaging is essential for personalization in surgical treatments, since it provides the location and quantification of healthy organs or pathological lesions. Obtaining virtual 3D models of the affected anatomy is crucial to represent the important structures during surgical navigation. Nevertheless, current segmentation procedures are based on manual or semi-automatic methods, which demand ample time from the medical professionals involved in this step. Therefore, there is a need to reduce the processing time of contouring tasks during preoperative planning. In recent years, AI has addressed this issue by using new computational power to perform these cumbersome tasks automatically, obtaining relevant information from the patients' medical images almost instantaneously. However, the implementation of these techniques has only been validated at local institutions, and spreading their use into more clinical workflows is not straightforward.

      Used independently, these technologies are already valuable in surgical treatments. However, their combination could improve navigation accuracy, reduce preoperative times and navigation complexity, and add surgical value. We believe that their use is still limited and that, combined, they can play an essential role in surgical guidance. Therefore, the main objective of this thesis is to increase patient personalization in surgical treatments by combining these technologies to bring surgical navigation to new complex cases: developing new patient registration methods, designing patient-specific tools, facilitating access to augmented reality for the medical community, and automating surgical workflows.

      In the first part of this dissertation, we present a novel framework for acral tumor resection (tumors located in distal extremities such as the hands or feet) combining intraoperative open-source navigation software, based on an optical tracking system, with desktop 3D printing. We used additive manufacturing to create a patient-specific mold that fixes the position of the distal extremity during both preoperative imaging and image-guided surgery. Open-source navigation software was adapted to guide the resection of these tumors.
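
      The thesis details the registration procedure used with the mold; as a generic, minimal sketch of the underlying idea, the following Python code computes a least-squares rigid transform (the classical SVD-based solution) between fiducial points measured with the tracker and their counterparts in the CT image. All point coordinates here are hypothetical.

        import numpy as np

        def rigid_register(moving, fixed):
            """Least-squares rigid transform (R, t) mapping moving to fixed."""
            cm, cf = moving.mean(axis=0), fixed.mean(axis=0)
            H = (moving - cm).T @ (fixed - cf)          # cross-covariance
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))      # avoid reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            return R, cf - R @ cm

        # Hypothetical fiducials digitized with the tracker (mm) ...
        tracker_pts = np.array([[0, 0, 0], [60, 0, 0], [0, 45, 0], [0, 0, 30]], float)
        # ... and the corresponding points located in the CT (simulated here)
        R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
        image_pts = tracker_pts @ R_true.T + np.array([10.0, -5.0, 2.0])

        R, t = rigid_register(tracker_pts, image_pts)
        fre = np.linalg.norm(tracker_pts @ R.T + t - image_pts, axis=1).mean()
        print(f"Fiducial registration error: {fre:.3f} mm")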

      The proposed molds are 3D printed in polylactic acid (PLA) and sterilized before surgery. This process may deform the objects, decreasing navigation accuracy. To evaluate this error, we compared the virtual 3D models obtained from CT images acquired before and after the sterilization process. The results showed that sterilization did not significantly deform the 3D printed molds. Moreover, the feasibility of the proposed workflow was assessed in two clinical cases (soft-tissue sarcomas in the hand and foot). The system achieved an overall accuracy of 1.88 mm, evaluated on patient-specific 3D printed phantoms. Surgical navigation was then used in the surgical procedures, showing that the proposed workflow can be easily integrated into the operating room to ensure proper margins during acral tumor removal. Surgeons gave positive feedback, suggesting that our solution could benefit acral tumor resection by reducing errors and increasing surgeons' confidence.
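
      As a minimal sketch of this comparison (assuming surface meshes reconstructed from both CTs, hypothetical file names, and both meshes already expressed in the same coordinate frame), the surface-to-surface distance between the two models can be estimated as follows:

        import trimesh

        # Meshes reconstructed from the CTs before and after sterilization
        # (if they are not already aligned, register them first)
        pre = trimesh.load("mold_pre_sterilization.stl")
        post = trimesh.load("mold_post_sterilization.stl")

        # Sample points on the post-sterilization surface and measure their
        # distance to the pre-sterilization surface
        points = post.sample(5000)
        _, dist, _ = trimesh.proximity.closest_point(pre, points)

        print(f"Mean surface distance: {dist.mean():.3f} mm")
        print(f"Max surface distance:  {dist.max():.3f} mm")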

      The second part of the thesis proposes an augmented reality navigation system that uses desktop 3D printing to create patient-specific tools with an attached tracking marker, enabling automatic patient-to-image registration in orthopedic oncology. This tool fits on the patient only in a single pre-designed location, in this case on bone tissue. The solution was developed as a software application running on the Microsoft HoloLens, an AR device that allows hands-free interaction, displaying virtual information on a semitransparent display in front of the user's eyes. The workflow was validated on a 3D printed phantom replicating the anatomy of a patient presenting an extraosseous Ewing's sarcoma of the distal leg, and then tested during the actual surgical intervention.
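
      Conceptually, the automatic registration reduces to a composition of transforms: since the guide fits the bone in a single pre-planned position, the anatomy's pose relative to the attached marker is known at design time, and detecting the marker immediately registers the CT data to the world. A minimal Python sketch with hypothetical 4x4 matrices:

        import numpy as np

        def hom(R, t):
            """Build a 4x4 homogeneous transform from rotation R, translation t."""
            T = np.eye(4)
            T[:3, :3], T[:3, 3] = R, t
            return T

        # Marker pose in world coordinates, as reported each frame by the AR
        # tracking framework (hypothetical values, meters)
        T_world_marker = hom(np.eye(3), np.array([0.10, -0.05, 0.40]))

        # Design-time pose of the CT anatomy relative to the marker, fixed
        # because the guide fits the bone in one unique position
        T_marker_anatomy = hom(np.eye(3), np.array([-0.02, 0.00, 0.03]))

        # Automatic patient-to-image registration by composition
        T_world_anatomy = T_world_marker @ T_marker_anatomy

        # Any CT-defined point (e.g., the tumor centroid) can now be overlaid
        tumor_ct = np.array([0.01, 0.02, 0.00, 1.0])
        print(T_world_anatomy @ tumor_ct)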

      First, the registration method was validated on the 3D printed patient-specific phantom, showing that the surgical guide position on the bone could be replicated with an accuracy of 2 mm. The AR visualization error was then validated on the same phantom using an optical tracking system as the gold standard, obtaining an error lower than 3 mm. The final solution was tested during the actual surgery, where surgeons obtained fast automatic patient-to-image registration with the surgical guide. They could visualize the skin, the affected bone, the tumor and the medical images directly on top of the patient during the intervention. Our results illustrate the feasibility of the proposed AR solution for surgical guidance in cases where rigid structures such as bone are accessible.

      After this study, we realized that devices such as the Microsoft HoloLens can bring AR features to surgical treatments in ways that were not imaginable before. However, these devices are still at an early technological stage, and their price limits adoption by physicians. Smartphones, on the other hand, could be used as AR devices, accelerating the spread of this technology in healthcare. Nevertheless, developing AR solutions requires advanced software engineering skills. To address this issue, in the next section of the thesis we present a step-by-step methodology that enables the use of AR and 3D printing by inexperienced users without broad technical knowledge. The proposed protocol describes how to develop an AR smartphone app that superimposes any patient-based 3D model onto a real-world environment using a 3D printed marker tracked by the smartphone camera. In addition, we describe an alternative approach for automatic registration between a 3D printed biomodel (i.e., a 3D model created from a patient's anatomy) and the projected holograms.

      The protocol first describes how to create virtual 3D models of a patient's anatomy derived from 3D medical images. It then explains how to position the 3D models with respect to the marker references (see the sketch below). We also provide instructions on how to 3D print the required tools and models. Finally, the steps to deploy the app are given. The protocol is based on free, multi-platform software and can be applied to any medical imaging modality or patient. Furthermore, we showed four clinical cases in which the final AR application benefits surgical specialties such as maxillofacial surgery, neurosurgery and orthopedics. We believe that our protocol contributes to accelerating the adoption of AR by medical professionals. In less than two years after the publication of the video article, it has reached more than 10,000 views, demonstrating the remarkable interest in these technologies in the biomedical field.
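
      As an illustration of the positioning step, the following Python sketch (using trimesh, with a hypothetical offset and rotation) places a patient model in marker coordinates before it is bundled into the app:

        import numpy as np
        import trimesh

        # Load the patient-derived model (hypothetical file name)
        model = trimesh.load("patient_model.stl")

        # Hypothetical design-time placement relative to the marker origin:
        # a 90-degree rotation about z plus an 80 mm offset along z
        T = trimesh.transformations.rotation_matrix(np.pi / 2, [0, 0, 1])
        T[:3, 3] = [0.0, 0.0, 80.0]   # millimeters
        model.apply_transform(T)

        # Export in marker coordinates; at runtime the AR app only needs the
        # detected marker pose to display the model in place
        model.export("patient_model_marker_space.obj")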

      In the fourth section of the thesis, we wanted to show the benefits of combining these technologies during the different stages of the surgical workflow in orthopedic oncology. We developed a novel AR-based smartphone application that can display the patient's anatomy and the tumor's location using a cubic 3D printed reference marker. Automatic registration between the virtual and real worlds is achieved with patient-specific surgical guides (with support for the reference marker) that fit a unique region of the affected bone tissue. The cubic shape of the reference marker allows visualization of the registered virtual data from different points of view and provides freedom of movement. To evaluate the contribution of both technologies, we tested the proposed system on six 3D printed patient-specific phantoms obtained from orthopedic tumors in a variety of anatomical locations. The solution was then clinically evaluated during the whole surgical workflow (surgical planning, patient communication, and surgical intervention) in two patients.

      Two users verified the uniqueness of the 3D printed surgical guide's positioning on the bone of the 3D printed phantoms, achieving a mean error of 1.75 mm. The AR tracking error was then validated on the same phantoms using the smartphone AR application, obtaining a mean error of 2.80 mm. The low errors achieved with our system encourage us to believe that virtual models can be displayed on top of the patient with enough accuracy to improve different steps of the surgical workflow.

      After this phantom evaluation, the methodology was favorably tested during the complete clinical workflow in the two clinical cases. According to the surgeons' feedback, the AR 3D visualization allowed them to establish the surgical strategy more confidently during preoperative planning. Both patients welcomed this technology as a way to better understand their pathology, and surgeons found it very useful for explaining the surgical approach. During surgery, the AR system displayed the corresponding tumor position on top of the patient together with virtual anatomical elements, boosting surgeons' confidence when verifying that the tumor had been adequately resected. These results and the positive feedback obtained from surgeons and patients suggest that the combination of AR and 3D printing can improve efficacy, accuracy, and the patients' experience.

      In the final section of the dissertation, two surgical navigation systems were developed and evaluated to guide electrode placement in sacral nerve stimulation (SNS) surgeries, aiming to reduce surgical time, minimize patient discomfort and improve surgical outcomes. For the first alternative, we developed open-source navigation software that guides electrode placement through real-time needle tracking with an optical tracking system. For the second, we present a smartphone-based AR application that displays virtual guidance elements directly on the affected area, using a 3D printed reference marker placed on the patient, facilitating needle insertion along a predefined trajectory. Both techniques were evaluated against the current surgical procedure in terms of number of insertions, accuracy, procedure duration, and radiation exposure. To compare the proposals with the clinical method, we developed an X-ray simulation tool that computes digitally reconstructed radiographs (DRRs), simulating the fluoroscopy acquisitions performed during the procedure.
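
      As a minimal sketch of the DRR principle (a parallel-beam projection rather than the actual C-arm geometry, with a hypothetical file name and intensity scaling), a fluoroscopy-like image can be simulated from the CT as follows:

        import numpy as np
        import SimpleITK as sitk
        import matplotlib.pyplot as plt

        # Load the preoperative CT (hypothetical file name)
        ct = sitk.ReadImage("pelvis_ct.nrrd")
        hu = sitk.GetArrayFromImage(ct).astype(float)   # z, y, x, in HU

        # Convert HU to relative linear attenuation: mu ~ 1 + HU / 1000
        mu = np.clip(1.0 + hu / 1000.0, 0.0, None)

        # Integrate attenuation along the (assumed) anterior-posterior axis
        # and apply Beer-Lambert; 0.02 is an arbitrary intensity scaling
        drr = np.exp(-mu.sum(axis=1) * 0.02)

        plt.imshow(drr, cmap="gray")
        plt.title("Simulated fluoroscopy (parallel-beam DRR)")
        plt.show()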

      Twelve physicians (inexperienced and experienced users) performed needle insertions towards a specific target to evaluate the alternative SNS guidance methods on a realistic patient-based phantom. With each navigation solution, users took less time on average to complete each insertion (36.83 seconds and 44.43 seconds for the OTS and AR methods, respectively) and needed fewer punctures on average to reach the target (1.23 and 1.96 for the OTS and AR methods, respectively) than with the standard clinical method (189.28 seconds and 3.65 punctures). Our results show that both systems could minimize patient discomfort and improve surgical outcomes by reducing needle insertion time and the number of punctures.

      Finally, we proposed a feasible clinical workflow for using both navigation methodologies during a clinical intervention. One step of this workflow is the segmentation of the sacral bone right after the CT acquisition. We developed an AI algorithm that automatically segments the sacrum in the medical image to create the virtual 3D model used to define the needle trajectory before surgery. This automatic method could reduce delays in the intervention and decrease the stress experienced by the staff involved in the procedure. The segmentation results were comparable in accuracy to the values reported in the literature on the same database. Additionally, the proposed clinical workflow offers feasible intraoperative patient-to-image registration, solving the major problem encountered in these scenarios.
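
      A minimal inference sketch for this step, assuming a hypothetical trained 3D segmentation network exported with TorchScript (in practice, inference is often patch-based to fit in memory):

        import numpy as np
        import torch
        import SimpleITK as sitk

        # Load the CT acquired at the start of the intervention (hypothetical)
        ct = sitk.ReadImage("intraop_ct.nrrd")
        volume = sitk.GetArrayFromImage(ct).astype(np.float32)
        volume = (volume - volume.mean()) / (volume.std() + 1e-8)  # normalize

        # Hypothetical trained segmentation network saved with TorchScript
        model = torch.jit.load("sacrum_unet.pt").eval()

        with torch.no_grad():
            x = torch.from_numpy(volume)[None, None]    # add batch, channel
            mask = (torch.sigmoid(model(x)) > 0.5).squeeze().numpy()

        # Write the binary sacrum mask back with the CT geometry, ready for
        # 3D model generation and trajectory planning
        out = sitk.GetImageFromArray(mask.astype(np.uint8))
        out.CopyInformation(ct)
        sitk.WriteImage(out, "sacrum_mask.nrrd")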

      Although AR seems a promising alternative to current surgical navigation technologies, more traditional navigation systems are still better in some cases. Even though it requires diverting the physician's attention from the patient towards the navigation display, the OTS solution received a better response from the physicians, being more intuitive and easier to learn. AR is a promising alternative for teaching and training, but further improvements are needed before it can replace traditional navigation, which remains the best solution for the near future.

      Right now, we are living through a technological transition in surgical treatments. AI has shown an unprecedented capacity to automate medical image analysis, accelerating personalization in many clinical treatments. Conventional navigation devices will probably disappear, giving way to augmented reality systems once those devices provide substantial benefits. 3D printing is here to stay and will indeed accelerate personalization in hospitals, for both patients and physicians. Finally, patient-to-image registration and tool tracking will become markerless thanks to the use of AI.

      To conclude, in this thesis, we have demonstrated that the combination of technologies such as tracking systems, augmented reality, 3D printing, and artificial intelligence overcomes many current limitations in surgical treatments. Our results encourage the medical community to combine these technologies to improve surgical workflows and outcomes in more clinical scenarios.

