Master's Thesis / IDP / Guided Research / Working Student: Enhancing Neuroradiology Reports with Agentic Deep Learning and Vision-Language Models
This project is conducted in collaboration with Cornell University and the Radiology Department at Presbyterian Hospital in New York City.
It can be pursued as a Master's Thesis, a Guided Research/IDP, or a Working Student position, or as any combination of these formats (a total duration of more than 6 months is preferred).
Background
Radiological imaging plays a crucial role in diagnosing and treating conditions such as various metastatic cancers or multiple sclerosis (MS), where clinicians must analyze and describe complex, high-dimensional MRI data. These assessments are often labor-intensive, variable in quality, and seldom capitalize on the depth of information available in longitudinal imaging sequences and their corresponding textual reports—both of which are frequently noisy, incomplete, or inconsistently organized. While recent strides in large-scale vision-language models have produced systems that exceed expert performance in specific tasks like diagnostic interpretation and report summarization, substantial obstacles remain. As a result, current methodologies are still largely misaligned with real-world clinical workflows and lack robust validation in critical fields such as neuroradiology. This project aims to develop and clinically validate an agentic decision-support system that integrates deep learning models into radiological practice.
Objective
The aim of your work is to develop deep learning methods for solving specific (neuro-)radiological tasks (e.g., metastasis detection), integrate these models into a multi-agent VLM framework, and deploy a prototype within real clinical workflows. Depending on your interests and skill set, the focus can be on either developing deep learning methods (Track A) or building and deploying a clinical prototype (Track B).
What we offer
- Collaboration on an international and interdisciplinary project involving Cornell University, Presbyterian Hospital, and TUM
- The opportunity to make a meaningful clinical impact
- A chance to contribute to an ongoing project with the goal of publishing in top-tier venues
- Close supervision and access to state-of-the-art computing infrastructure
- Potential transition into a PhD project at TUM
Requirements
The prerequisites depend on your chosen track.
For Track A, you should have a solid background in deep learning and image analysis (e.g., CNNs and Transformers) and experience with frameworks such as PyTorch. For Track B, you should have a background in software engineering and experience in developing and deploying applications (e.g., with JavaScript/React or similar). Experience with PACS and clinical applications is a plus. Regardless of the track, you should be highly motivated to drive the project forward, contribute your own ideas, and be enthusiastic about teamwork and interdisciplinary research.
How to apply
Please send your CV and transcript to Alex (a.berger[at]tum.de).
Include a brief but specific summary of your relevant previous projects.
If available, provide links to relevant GitHub repositories.
Also, indicate which track you are most interested in.
References
Tanno, Ryutaro, et al. "Collaboration between clinicians and vision–language models in radiology report generation." Nature Medicine 31.2 (2025): 599-608.
Singhal, Karan, et al. "Toward expert-level medical question answering with large language models." Nature Medicine 31.3 (2025): 943-950.
Van Veen, Dave, et al. "Adapted large language models can outperform medical experts in clinical text summarization." Nature Medicine 30.4 (2024): 1134-1142.
Yu, Feiyang, et al. "Heterogeneity and predictors of the effects of AI assistance on radiologists." Nature Medicine 30.3 (2024): 837-849.
Menze, Bjoern H., et al. "The multimodal brain tumor image segmentation benchmark (BRATS)." IEEE Transactions on Medical Imaging 34.10 (2014): 1993-2024.
Hamamci, Ibrahim Ethem, Sezgin Er, and Bjoern Menze. "CT2Rep: Automated radiology report generation for 3D medical imaging." International Conference on Medical Image Computing and Computer-Assisted Intervention. Cham: Springer Nature Switzerland, 2024.
