Master Thesis: Visualization & Explainability of ML based swallow event detection and classification

07.05.2024, Final Theses, Bachelor's and Master's Theses

Master’s thesis in the area of Medical Data Analysis, Visualization & Explainability at Research Group MITI, University Hospital rechts der Isar, TUM

Background:

Patients with benign esophageal diseases often endure lengthy medical journeys before receiving a definitive diagnosis. The diagnostic process entails a spectrum of examinations, with high-resolution manometry (HRM) being the foremost method for detecting esophageal motility disorders. In a prior study, we introduced a Deep Learning-based approach for identifying and clustering swallow events within long-term HRM datasets, enabling subsequent classification of the swallows [1]. Given the clinical significance of these findings, they need to be presented in a form that medical experts can readily scrutinize and evaluate. To foster trust, the presented results should also convey the model's reasoning, making them more explainable to the end user.

Goal:

This thesis project aims to design and develop a tool for visualizing the medically relevant outcomes of our swallow detection and classification methodology. The visualization should effectively convey all relevant details. The requirements should first be defined through interviews with medical experts. Subsequently, the tool will be implemented to meet these specifications. Furthermore, recognizing the importance of explainability in Deep Learning systems, particularly in medical contexts, the thesis should explore methods to incorporate explainability features into the visualization, enhancing the tool's credibility. Ultimately, the developed tool should undergo evaluation through a user study to quantify its medical impact.
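
To make this more concrete, the sketch below shows one possible way such a tool could render a long-term HRM recording as a spatio-temporal pressure heatmap with detected swallow events overlaid. It is only an illustration under assumptions: the data layout (sensors x samples in mmHg), the sampling rate, and the event format are placeholders, not the formats used in our study [1].

# Minimal sketch (not the project's actual tool): render an HRM recording as a
# pressure heatmap and mark detected swallow events for clinical review.
import numpy as np
import matplotlib.pyplot as plt

def plot_hrm_with_events(pressure, events, sample_rate_hz=50):
    """pressure: (n_sensors, n_samples) array in mmHg (assumed layout).
    events: list of (start_sample, end_sample, label) tuples (assumed format)."""
    fig, ax = plt.subplots(figsize=(12, 4))
    n_sensors, n_samples = pressure.shape
    t = np.arange(n_samples) / sample_rate_hz
    # Spatio-temporal pressure plot: x = time, y = sensor position along the esophagus.
    im = ax.imshow(pressure, aspect="auto", origin="upper",
                   extent=[t[0], t[-1], n_sensors, 0], cmap="jet")
    fig.colorbar(im, ax=ax, label="Pressure [mmHg]")
    # Shade each detected swallow interval and annotate its (classified) label.
    for start, end, label in events:
        ax.axvspan(start / sample_rate_hz, end / sample_rate_hz,
                   color="white", alpha=0.2)
        ax.text(start / sample_rate_hz, 1.0, label, color="white", fontsize=8)
    ax.set_xlabel("Time [s]")
    ax.set_ylabel("Sensor (proximal to distal)")
    fig.tight_layout()
    return fig

# Example with purely synthetic data (36 sensors, 60 s at 50 Hz):
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pressure = rng.normal(10, 5, size=(36, 3000))
    events = [(500, 750, "swallow"), (1800, 2100, "swallow")]
    plot_hrm_with_events(pressure, events).savefig("hrm_events.png", dpi=150)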

Tasks:

  • Conduct a review of existing analogous tools and solutions.
  • Identify the concrete requirements (e.g. through interviews) and develop the visualization tool.
  • Investigate and implement strategies for integrating explainability into the system (see the sketch after this list).
  • Evaluate the tool through a user study.
  • Produce written documentation detailing the methods and outcomes (thesis).
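
As referenced in the explainability task above, the following is a minimal sketch of one possible strategy: gradient-based saliency for a PyTorch classifier applied to a fixed-length swallow window, which could then be overlaid on the visualization shown to clinicians. The window shape, the number of swallow classes, and the stand-in model are hypothetical; the actual classifier from [1] is not reproduced here.

# Minimal sketch, assuming a PyTorch classifier over fixed-length swallow windows:
# gradient-based saliency highlights which sensors/time points drove a prediction.
import torch
import torch.nn as nn

def saliency_map(model: nn.Module, window: torch.Tensor, target_class: int) -> torch.Tensor:
    """window: (n_sensors, n_samples) tensor for one detected swallow (assumed shape).
    Returns |d score / d input| with the same shape, normalized to [0, 1]."""
    model.eval()
    x = window.detach().unsqueeze(0).clone().requires_grad_(True)  # add batch dimension
    score = model(x)[0, target_class]                              # logit of the class to explain
    score.backward()
    sal = x.grad.detach().abs().squeeze(0)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)

# Example with a toy stand-in model (4 hypothetical swallow classes):
if __name__ == "__main__":
    toy_model = nn.Sequential(nn.Flatten(), nn.Linear(36 * 200, 4))
    window = torch.randn(36, 200)
    sal = saliency_map(toy_model, window, target_class=2)
    print(sal.shape)  # torch.Size([36, 200])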

Requirements:

  • Proficiency in app development.
  • Experience with Python, along with expertise in Machine Learning and Computer Vision and the corresponding common libraries (e.g. PyTorch).
  • Prior experience in the area of Explainable AI is preferred.
  • A keen interest in medicine and applied research.
  • Self-motivation and an ability to work independently.

[1] https://arxiv.org/pdf/2405.01126

Please contact us if you are interested in this topic.

Contact: alexander.geiger@tum.de
