Master’s Thesis Opportunity: Differential Privacy for Interpretability
Research Questions
Primary Question: How can differential privacy (DP) be leveraged as a constraint to understand the hierarchical nature of concept learning in machine learning models? Can DP be used to improve model interpretability?
Secondary Questions:
- How can we effectively identify and analyze how specific concepts are learned under DP constraints?
- What metrics can be used to quantify concept acquisition?
- How does the level of privacy (e.g., the privacy budget ε) influence the order and granularity of learned concepts?
Methods & Tools
The thesis will involve a combination of theoretical analysis and empirical evaluation. Possible methodologies include:
- Implementation of experiments in Python (e.g., PyTorch with a privacy-preserving framework such as Opacus).
- Qualitative analysis of learned concepts and identification of relevant subpopulations.
- Development of quantitative metrics to evaluate concept acquisition under varying DP constraints (see the sketch after this list).
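To make this concrete, here is a minimal sketch of the kind of experiment a thesis could start from: training a small classifier with DP-SGD via the Opacus library while tracking per-class accuracy over epochs as a crude proxy for concept acquisition. The model, toy data, noise multiplier, and all other hyperparameters are illustrative placeholders, not a prescribed setup.

```python
# Minimal illustrative sketch (all data, model, and hyperparameters are placeholders).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Toy data: treat the 10 class labels as stand-ins for "concepts".
X = torch.randn(2048, 32)
y = torch.randint(0, 10, (2048,))
train_loader = DataLoader(TensorDataset(X, y), batch_size=128)

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

# Wrap model/optimizer/loader so per-sample gradients are clipped and noised (DP-SGD).
privacy_engine = PrivacyEngine()
model, optimizer, train_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    noise_multiplier=1.0,  # higher noise -> stronger privacy
    max_grad_norm=1.0,     # per-sample gradient clipping bound
)

for epoch in range(5):
    for xb, yb in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()

    # Per-class accuracy as a simple proxy for which "concepts" have been acquired.
    with torch.no_grad():
        preds = model(X).argmax(dim=1)
        per_class = [round((preds[y == c] == c).float().mean().item(), 2) for c in range(10)]
    eps = privacy_engine.get_epsilon(delta=1e-5)
    print(f"epoch {epoch}: eps = {eps:.2f}, per-class acc = {per_class}")
```

Tracking how such per-concept curves shift as the noise multiplier (and hence the privacy budget) varies is one simple way to probe the order and granularity of concept learning under DP.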
Prerequisites
Required:
- Strong motivation
- Strong programming skills in Python
- Background in machine learning and deep learning
Preferred:
- Experience with privacy-preserving ML frameworks
- Familiarity with differential privacy fundamentals
- Knowledge of interpretable AI techniques
- Good mathematical foundations
What We Offer
- Access to computing resources and datasets for experimentation
- Support from experienced researchers in DP and interpretability
- Potential opportunities to publish findings at AI/ML conferences
Application Process
Interested students should send the following to sarah.lockfisch@tum.de and jonas.kuntzer@tum.de:
- A brief motivation letter (1 page max)
- Your CV
- Academic transcripts
- Any relevant project samples or publications (if available)
For further questions, feel free to reach out to Sarah Lockfisch or Jonas Kuntzer.
Contact
- Sarah Lockfisch: sarah.lockfisch@tum.de
- Jonas Kuntzer: jonas.kuntzer@tum.de