Sonia Laguna

PhD in Machine Learning @ETH Zurich

Previously @Cambridge @Google @Harvard

About Me

Hi there! 👋 I am a PhD student in Machine Learning at ETH Zurich, supervised by Prof. Julia Vogt (ETH) and Prof. Bernhard Schölkopf (Max Planck Institute). My research lies at the intersection of generative modeling, representation learning, interpretability, and machine unlearning. Broadly, I study how modern machine learning models structure information internally, and how these representations can be used to make models more controllable, understandable, and responsible. My work spans generative models such as diffusion models, VAEs, and LLMs, as well as multimodal and vision-language foundation models, with a particular emphasis on structured representations, concept-based interpretability, and machine unlearning through selective forgetting.

During my PhD, I have been a visiting student at the University of Cambridge with Prof. Mihaela van der Schaar, working on alignment and interpretability of LLMs. Additionally, I have been a Research Intern and a Student Researcher at Google, developing 3D diffusion-based generative models in the AR&VR team, and a co-leader of CSNOW, the Computer Science Network of Women at ETH.

Prior to my doctoral studies, I obtained an MSc from the Department of Information Technology and Electrical Engineering at ETH Zurich, and spent a semester at Harvard University working on 3D generative models for super-resolution of MR images. I was lucky to be supported by two Spanish excellence fellowships, La Caixa and Rafael del Pino. Before that, I completed my BSc in Biomedical Engineering at Universidad Carlos III de Madrid, spent one year at Georgia Institute of Technology, and carried out an internship at ETH Zurich as an Amgen Scholar.

I am always happy to collaborate and discuss new topics — feel free to reach out! 😃💡

Interests
  • Representation Learning
  • Machine Unlearning
  • Generative Models
  • Interpretability
🗞️ Recent News

See all News »

📝 Featured Publications

See all Publications » or check out my Google Scholar »

(2026). Structure is Supervision: Multiview Masked Autoencoders for Radiology. In Transactions on Machine Learning Research (TMLR), 2026.
(2026). Rethinking Machine Unlearning: Models Designed to Forget via Key Deletion. In ICLR 2026 Workshop TTU (Oral).
(2026). Reference-Guided Machine Unlearning. In ICLR 2026 Workshop AIWILD.
(2024). Beyond Concept Bottleneck Models: How to Make Black Boxes Intervenable. In NeurIPS 2024.
(2024). Stochastic Concept Bottleneck Models. In NeurIPS 2024.